Saturday, August 03, 2013

The Spider and the Fly

Michael Lewis has written a riveting report on the trial, incarceration, release, and re-arrest of Sergey Aleynikov, once a star programmer at Goldman Sachs. It's a tale of a corporation coming down with all its might on a former employee who, when all is said and done, damaged the company only by deciding to take his prodigious talents elsewhere.

As is always the case with Lewis, the narrative is brightly lit while the economic insights lie half-concealed in the penumbra of his prose. In this case he manages to shed light on the enormous divergence between the private and social costs of high frequency trading, as well as the madness of an intellectual property regime in which open-source code routinely finds its way into products that are then walled off from the public domain, violating the spirit if not the letter of the original open licenses.

Aleynikov was hired by Goldman to help improve its relatively weak position in what is rather euphemistically called the market-making business. In principle, this is the business of offering quotes on both sides of an asset market in order that investors wishing to buy or sell will find willing counterparties. It was once a protected oligopoly in which specialists and dealers made money on substantial spreads between bid and ask prices, in return for which they provided some measure of price continuity.

But these spreads have vanished over the past decade or so as the original market makers have been displaced by firms using algorithms to implement trading strategies that rely on rapid responses to incoming market data. The strategies are characterized by extremely short holding periods, limited intraday directional exposure, and very high volume. A key point in the transition was the adoption in 2007 of Regulation NMS (National Market System), which required that orders be routed to the exchange offering the best available price. This led to a proliferation of trading venues, since order flow could be attracted by price alone. Lewis describes the transition thus:
For reasons not entirely obvious... the new rule stimulated a huge amount of stock-market trading. Much of the new volume was generated not by old-fashioned investors but by extremely fast computers controlled by high-frequency-trading firms... Essentially, the more places there were to trade stocks, the greater the opportunity there was for high-frequency traders to interpose themselves between buyers on one exchange and sellers on another. This was perverse. The initial promise of computer technology was to remove the intermediary from the financial market, or at least reduce the amount he could scalp from that market. The reality has turned out to be a boom in financial intermediation and an estimated take for Wall Street of somewhere between $10 and $20 billion a year, depending on whose estimates you wish to believe. As high-frequency-trading firms aren’t required to disclose their profits... no one really knows just how much money is being made. But when a single high-frequency trader is paid $75 million in cash for a single year of trading (as was Misha Malyshev in 2008, when he worked at Citadel) and then quits because he is “dissatisfied,” a new beast is afoot. 
The combination of new market rules and new technology was turning the stock market into, in effect, a war of robots. The robots were absurdly fast: they could execute tens of thousands of stock-market transactions in the time it took a human trader to blink his eye. The games they played were often complicated, but one aspect of them was simple and clear: the faster the robot, the more likely it was to make money at the expense of the relative sloth of others in the market.
This last point is not quite right: speed alone can't get you very far unless you have an effective trading strategy. Knight Capital managed to lose almost a half billion dollars in less than an hour not because their algorithms were slow but because they did not faithfully execute the intended strategy. But what makes a strategy effective? The key, as Andrei Kirilenko and his co-authors discovered in their study of transaction-level data from the S&P E-mini futures market, is predictive power:
High Frequency Traders effectively predict and react to price changes... [they] are consistently profitable although they never accumulate a large net position... HFTs appear to trade in the same direction as the contemporaneous price and prices of the past five seconds. In other words, they buy... if the immediate prices are rising. However, after about ten seconds, they appear to reverse the direction of their trading... possibly due to their speed advantage or superior ability to predict price changes, HFTs are able to buy right as the prices are about to increase... They do not hold positions over long periods of time and revert to their target inventory level quickly... HFTs very quickly reduce their inventories by submitting marketable orders. They also aggressively trade when prices are about to change. 
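The interplay of speed and prediction described above can be made concrete with a toy simulation. This is purely illustrative: the one-tick market, the 2% edge, and the trade counts are assumptions for the sketch, not features of any actual HFT strategy.

```python
import random

random.seed(0)

def simulate(skill, n=10_000):
    """Toy one-tick market: the price moves +1 or -1 each tick. A trader
    with predictive `skill` calls the next move correctly with probability
    0.5 + skill, positions one unit ahead of it, and unwinds immediately
    after (flat inventory, enormous turnover)."""
    pnl = 0
    for _ in range(n):
        move = random.choice([1, -1])
        correct = random.random() < 0.5 + skill
        pnl += 1 if correct else -1  # gain if the call matched the move
    return pnl

fast_but_blind = simulate(skill=0.0)   # speed but no predictive edge
fast_and_sharp = simulate(skill=0.02)  # a small edge, compounded over volume
print(fast_but_blind, fast_and_sharp)
```

The blind trader churns a great deal of volume to no effect, while even a tiny predictive edge, applied at high frequency, accumulates into consistent profit, which is the pattern Kirilenko and his co-authors document.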
Aleynikov was hired to speed up Goldman's systems, but he was largely unaware of (and seemed genuinely uninterested in) the details of their trading strategies. Here's Lewis again:
Oddly, he found his job more interesting than the stock-market trading he was enabling. “I think the engineering problems are much more interesting than the business problems,” he says... He understood that Goldman’s quants were forever dreaming up new trading strategies, in the form of algorithms, for the robots to execute, and that these traders were meant to be extremely shrewd. He grasped further that “all their algorithms are premised on some sort of prediction—predicting something one second into the future.”
Effective prediction of price movements, even over such very short horizons, is not an easy task. It is essentially a problem of information extraction, based on rapid processing of incoming market data. The important point is that this information would have found its way into prices sooner or later in any case. By anticipating the process by a fraction of a second, the new market makers are able to generate a great deal of private value. But they are not responsible for the informational content of prices, and their profits, as well as the substantial cost of their operations, therefore must come at the expense of those investors who are actually trading on fundamental information.

It is commonly argued that high frequency trading benefits institutional and retail investors because it has resulted in a sharp decline in bid-ask spreads. But this spread is a highly imperfect measure of the value to investors of the change in regime. What matters, especially for institutional investors placing large orders based on fundamental research, is not the marginal price at which the first few shares trade but the average price over the entire transaction. And if their private information is effectively extracted early in this process, the price impact of their activity will be greater, and price volatility will be higher in general.

After all, it was a large order from an institutional investor in the S&P futures market that triggered the flash crash, briefly sending indexes plummeting and individual securities to absurd prices: Accenture traded for a penny on the way down, and Sotheby's for a hundred thousand dollars a share on the bounce back.

In evaluating the impact on investors of the change in market microstructure, it is worth keeping in mind Bogle's Law:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
This is just basic accounting, but often overlooked. If one wants to argue that the new organization of markets has been beneficial to investors, one needs to make the case that the costs of financial intermediation in the aggregate have gone down. Smaller bid-ask spreads have to be balanced against the massive increase in volume, the profits of the new market makers, and most importantly, the costs of high-frequency trading. These include nontrivial payments to highly skilled programmers and quants, as well as the costs of infrastructure, equipment, and energy. Lewis notes that the "top high-frequency-trading firms chuck out their old gear and buy new stuff every few months," but these costs probably pale in comparison with those of cables facilitating rapid transmission across large distances and the more mundane costs of cooling systems. All told, it is far from clear that the costs of financial intermediation have fallen in the aggregate.
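Bogle's Law is plain arithmetic, and the argument can be sketched with deliberately made-up round numbers. Only the HFT-profit range comes from Lewis's article (the $10-20 billion a year estimate, midpoint used here); every other figure is a placeholder.

```python
# Bogle's Law as accounting: net return to investors in aggregate is
# gross market return minus all costs of financial intermediation.
gross_market_return = 500e9  # hypothetical aggregate gross return
spreads_paid        = 5e9    # narrower spreads, but on much higher volume
hft_profits         = 15e9   # midpoint of the $10-20B range Lewis cites
infrastructure      = 3e9    # cables, colocation, cooling, hardware churn

intermediation_costs = spreads_paid + hft_profits + infrastructure
net_to_investors = gross_market_return - intermediation_costs
print(f"net to investors: ${net_to_investors / 1e9:.0f}B")
```

The point of the exercise is simply that narrower spreads are one entry in a ledger with several others; whether the total has fallen is an empirical question the spread alone cannot answer.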

This post is already too long, but I'd like to briefly mention a quite different point that emerges from the Lewis article since it relates to a theme previously explored on this blog. Aleynikov relied routinely on open-source code, which he modified and improved to meet the needs of the company. It is customary, if not mandatory, for these improvements to be released back into the public domain for use by others. But his attempts to do so were blocked:
Serge quickly discovered, to his surprise, that Goldman had a one-way relationship with open source. They took huge amounts of free software off the Web, but they did not return it after he had modified it, even when his modifications were very slight and of general rather than financial use. “Once I took some open-source components, repackaged them to come up with a component that was not even used at Goldman Sachs,” he says. “It was basically a way to make two computers look like one, so if one went down the other could jump in and perform the task.” He described the pleasure of his innovation this way: “It created something out of chaos. When you create something out of chaos, essentially, you reduce the entropy in the world.” He went to his boss, a fellow named Adam Schlesinger, and asked if he could release it back into open source, as was his inclination. “He said it was now Goldman’s property,” recalls Serge. “He was quite tense. When I mentioned it, it was very close to bonus time. And he didn’t want any disturbances.” 
Open source was an idea that depended on collaboration and sharing, and Serge had a long history of contributing to it. He didn’t fully understand how Goldman could think it was O.K. to benefit so greatly from the work of others and then behave so selfishly toward them... But from then on, on instructions from Schlesinger, he treated everything on Goldman Sachs’s servers, even if it had just been transferred there from open source, as Goldman Sachs’s property. (At Serge’s trial Kevin Marino, his lawyer, flashed two pages of computer code: the original, with its open-source license on top, and a replica, with the open-source license stripped off and replaced by the Goldman Sachs license.)
This unwillingness to refresh the reservoir of ideas from which one drinks may be good for the firm but is clearly bad for the economy. As Michele Boldrin and David Levine have strenuously argued, the rate of innovation in the software industry was dramatic prior to 1981 (before which software could not be patented):
What about the graphical user interfaces, the widgets such as buttons and icons, the compilers, assemblers, linked lists, object oriented programs, databases, search algorithms, font displays, word processing, computer languages – all the vast array of algorithms and methods that go into even the simplest modern program? ... Each and every one of these key innovations occurred prior to 1981 and so occurred without the benefit of patent protection. Not only that, had all these bits and pieces of computer programs been patented, as they certainly would have in the current regime, far from being enhanced, progress in the software industry would never have taken place. According to Bill Gates – hardly your radical communist or utopist – “If people had understood how patents would be granted when most of today's ideas were invented, and had taken out patents, the industry would be at a complete standstill today.”
Vigorous innovation in open source development continues under the current system, but relies on a willingness to give back on the part of those who benefit from it, even if they are not legally mandated to do so. Aleynikov's natural instincts to reciprocate were blocked by his employer for reasons that are easy to understand but very difficult to sympathize with.

Lewis concludes his piece by reflecting on Goldman's motives:
The real mystery, to the insiders, wasn’t why Serge had done what he had done. It was why Goldman Sachs had done what it had done. Why on earth call the F.B.I.? Why coach your employees to say what they need to say on a witness stand to maximize the possibility of sending him to prison? Why exploit the ignorance of both the general public and the legal system about complex financial matters to punish this one little guy? Why must the spider always eat the fly?
The answer to this, I think, is contained in the company's response to Lewis, which is now appended to the article. The statement is impersonal, stern, vague and legalistic. It quotes an appeals court that overturned the verdict in a manner that suggests support for Goldman's position. Like the actions of the proverbial spider, it's a reflex, unconstrained by reflection or self-examination. Even if the management's primary fiduciary duty is to protect the interests of shareholders, this really does seem like a very shortsighted way to proceed.

---

Update (August 6). RT Leuchtkafer, whose writing has been featured in several earlier posts, sends in the following by email (posted with permission):
I'd add the task for HFT shops is more than information extraction in short timeframes - they've expanded that task to be one of coaxing information leakage from the exchanges, for which they pay the exchanges handsomely.

On intermediation and its costs, intermediary participation in the equities markets has easily tripled since the HFT innovation (and the deregulation of intermediation), and so on net I've argued that aggregate position intermediation costs have gone up even as per share costs have gone down.  Intermediaries make much less on a share than they used to but thanks to deregulation they interpose themselves between natural buyers and sellers much more often than they did, with the result that even though portfolio implementation costs have gone down the portion of those costs captured by intermediaries has greatly increased. 
In addition, Steve Waldman has pointed out that the costs of defensive expenditures to counter HFT strategies are also subject to Bogle's Law and need to be accounted for. For some vivid examples see this post by Jason Voss (via Themis Trading).

Responses to this post on Economist's View and Naked Capitalism are also worth a look; I especially recommend the discussion of open source following this comment by Brooklin Bridge.

Thursday, April 25, 2013

Macon Money

Among the many fascinating people currently affiliated with the Microsoft Research New York lab is Kati London, judged by MIT's Technology Review Magazine (2010) to be among the “Top 35 Innovators Under 35.” Through her involvement with the start-up area/code, Kati has developed games that transform the individuals who play them and the communities in which they reside.

One such project is Macon Money, an initiative involving the Knight Foundation and the College Hill Alliance in Macon, Georgia. This simple experiment, amazingly enough, sheds light on some fundamental questions in monetary economics, helps explain why conventional monetary policy via asset purchases has recently been so ineffective in stimulating the economy, suggests alternative approaches that might be substantially more effective, and speaks to the feasibility of the Chicago Plan (originally advanced by Henry Simons and Irving Fisher, and recently endorsed by a couple of IMF economists) to abolish privately issued money.

So what exactly was the Macon Money project? It began with a grant of $65,000 by the Knight Foundation, which was used to back the issue of bonds. These bonds were (literally) sliced in two and the halves were given away through various channels to residents of Macon. If a pair of individuals holding halves of the same bond could find each other, they were able to exchange the (now complete) bond for Macon Money, which could then be used to make expenditures at a variety of local businesses. These businesses were happy to accept Macon Money because it could be redeemed at par for US currency.

The basic idea is described here:

The demographics of the participant population, the distribution of expenditures, and the strategies used by players to find their "other halves" are all described in an evaluation summary. The project had the twin goals of building social capital and stimulating economic development. Although few enduring ties were created among the players, participation did create a sense of excitement about Macon and greater optimism about its future. And participating businesses managed to find a new pool of repeat customers.

Macon Money was a fiscal intervention (an injection of funds into the locality) accomplished using the device of privately issued money convertible at par. There was a temporary increase in the local money supply which was extinguished when businesses redeemed their notes. An interesting thought experiment is to imagine what would have happened if, instead of being convertible at par, businesses could only convert Macon Money into currency at a small discount.

Businesses that accept credit card payments are exactly in this situation, facing a haircut of 1-3 percent when they convert credit card payments into cash. Most businesses that participated in the original experiment would therefore likely continue to participate in the modified one. After all, businesses involved in Groupon campaigns accept a 75% haircut once Groupon takes its share of the discounted price.

But there is one critically important difference between Macon Money and a credit card payment: the former is negotiable while the latter is not. That is, instead of being redeemed at a small discount, Macon Money could be spent at par. If enough businesses were participating, it would make sense for each one to spend rather than redeem its receipts. The privately issued money would therefore remain in circulation.

What about a business that had no interest in spending its receipts on locally provided goods and services? Even in this case, there would be better alternatives to redeeming at a discount. For instance, if the discount were 3%, there would be room for the emergence of a local intermediary who offered cash at a more attractive 2% discount to the business, and then sold Macon Money at 1% below par to those who did wish to spend locally. Again, the privately issued money would remain in circulation.
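The intermediary's arithmetic from this paragraph can be spelled out. The discounts are the hypothetical ones from the thought experiment, not terms from the actual project.

```python
# Three ways a note can leave a disinterested business's hands:
# redemption with the issuer at 3% below par, sale to a local
# intermediary at 2% below par, or (for the intermediary's customer)
# purchase at 1% below par followed by spending at full face value.
face = 100.00

redeem_with_issuer = face * (1 - 0.03)  # the business's fallback: 97.00
sell_to_middleman  = face * (1 - 0.02)  # strictly better for the business
middleman_resells  = face * (1 - 0.01)  # local spender pays 99, spends 100

middleman_margin = middleman_resells - sell_to_middleman
print(redeem_with_issuer, sell_to_middleman, middleman_margin)
```

Everyone in the chain does better than the redemption fallback, so the notes stay in circulation rather than returning to the issuer.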

As a result, the local money supply would have grown not just for a brief period, but indefinitely. The discount itself would allow for more money to be injected for any given amount of backing funds. And as long as convertibility was never in doubt, substantially more money could be issued than the funds earmarked to back it.
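The claim that issuance could exceed the backing funds is the familiar fractional-reserve logic, and can be sketched in one line. The peak redemption share below is an assumption for illustration; only the $65,000 backing figure comes from the project itself.

```python
def max_issuance(backing, peak_redemption_share):
    """Fractional-reserve-style sketch: if at most a fraction f of
    outstanding notes is ever presented for redemption at one time,
    backing B can support issuance up to B / f without convertibility
    being threatened."""
    return backing / peak_redemption_share

print(max_issuance(65_000, 1.00))  # everyone might redeem: $65,000
print(max_issuance(65_000, 0.25))  # only a quarter ever redeemed: $260,000
```

The calculation also shows why the arrangement depends on confidence: if the expected redemption share rises toward one, the sustainable issuance collapses back to the backing itself.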

This simple thought experiment tells us something about policy. Macon Money provided an injection of liquidity that improved the balance sheets of those who managed to secure bonds. This allowed for an increase in aggregate expenditure, and given the slack in local productive capacity, also an increase in production.

It was expansionary monetary policy, but quite different from the kind of policy pursued by the Federal Reserve. The Fed expands the money supply by buying securities, which leads to a change in the composition of the asset side of individual balance sheets. Higher asset prices (and correspondingly lower interest rates) are supposed to stimulate demand through increased borrowing at more attractive rates. But in a balance sheet recession, distressed borrowers are unwilling to take on more debt and the stimulative effects of such a policy are accordingly muted. This is why calls for an alternative approach to monetary policy make analytical sense.

Furthermore, the fact that Macon Money was accepted only locally meant that it could not be used for imports from other locations. The monetary stimulus was therefore not subject to the kinds of demand leakages that would arise from the issue of generalized fiat money.

Finally, the project provided a very clear illustration of the difficulty of abolishing privately issued money. Unless one were to prevent all creditworthy institutions from issuing convertible liabilities, it would be virtually impossible to halt the use of such liabilities as media of exchange. Put differently, we are always going to have a shadow banking system. But what the Macon Money initiative shows is that the creative and judicious use of private money, backed by creditworthy foundations, can revitalize communities currently operating well below their productive potential. Whether this can be done in a scalable way, with some government involvement and oversight to prevent abuse, remains unclear. But surely the idea deserves a closer look?

---

Update. Another important feature of Macon Money is the fact that it cannot be used to pay down debt unless the creditors are themselves local. This means that even highly indebted households will either spend it, or pay down debt by selling their notes to someone who will. If increasing economic activity is the goal, this is vastly superior to disbursements of cash.

Joseph Cotterill asks (rhetorically) whether Macon Money is the anti-Bitcoin. Exactly right, and very well put.

---

Ashwin Parameswaran has sent in the following via email (posted with permission):
Just read your post on Macon money - fascinating experiment and your thought experiment on how it could stay in circulation was equally interesting.  
On the thought experiment, there's also a possibility that if Macon money can only be converted into currency at a discount then Macon money itself would be valued at a discount. Coming back to your example on credit cards, this often happens in countries where retailers can get away with it. Lots of small retailers in India offer cash discounts even when they give you a receipt i.e. its not just a tax dodge. Many retailers in the UK simply don't accept Amex cards because of the size of the haircut they impose.  
On the broader subject of imagining various types of money, this is probably closest to private banking money whereas Bitcoin is by design closest to gold. Another experiment is the idea of pure local credit money without even the intermediation of a private bank-like entity which seems to be the idea behind Ripple although the current implementation seems to be a little different. Over the last year I've done a lot of reading on 14th-17th century English history of credit/money and its almost universally accepted that most of the local money worked largely with such peer-to-peer credit systems with gold perennially being in short supply. The section of this post titled 'Interest-Bearing Money: Debt as Money' summarises some of my reading. The first half of Carl Wennerlind’s book ‘Casualties of Credit’ is excellent and has some great references in this area. 
You could see the entire arc of the last 400 years as an exercise in making these private webs of credit more stable. So peer-to-peer credit became private banking. Then comes the lender of last resort and fiat money so that the LOLR is not constrained. At the same time we make the collateral safer and safer - govt bonds during the English Financial revolution, now MBS, bank debt etc. The irony is that now banks finance everything except what they started out financing which is SME bills of exchange/invoices. Partly the reasons are regulatory but fundamentally the risk is too idiosyncratic and "micro" in an environment where macro risks are backstopped. 
In fact here in the UK there's a lot of non-bank and even peer to peer interest in some of these spaces. See this one for invoice financing (the interest is partly because peer-to-peer lending in the UK has almost no regulatory burden, not regulated by the FSA at all). In a way this is just a modern-day reconstruction of the same system that existed in 16th-17th century England - peer-to-peer webs of credit. But with the critical difference that the system is not as elastic and doesn't really need to be. There are enough individuals, insurers etc who are more than capable of taking on the real risk and giving up their own purchasing power in the interim period for an adequate return. 
Lots to think about here. Briefly, on the issue of Macon Money being valued at a discount, this seems unlikely to me except in a secondary market for conversion into cash. Unlike credit card receipts, Macon Money is negotiable, and as long as it can be converted into goods and services at par it will be valued at par by those who plan to spend it. Of course there may be an equilibrium in which vendors themselves only accept it at a discount, which then becomes a self-sustaining practice. This would be equivalent to a selective increase in price, possible only if there is insufficient competition.

Here's more from Ashwin:
Another tangential point on the peer-to-peer credit networks in 16th century England was that although they had the downside of being perpetually fragile (there are accounts of middle-class traders feeling permanently insecure because they were always entrenched in long webs of credit), this credit money could not be hoarded by anyone. In this sense it really was the anti-gold/bitcoin. I wonder what you would need to do to Macon Money to protect against the potential leakage of being just hoarded as a store of value. This is of course what people like Silvio Gesell were concerned with (there are some excellent comments by anonymous commenter 'K' on this Nick Rowe post on the paradox of hoarding). I think there's merit to a modern money that could be a medium of exchange but could not serve as a store of wealth. I often think about what such a money could look like but at the end of the day we really need experiments and trials to figure out what could work. 
Agreed.

Saturday, April 06, 2013

Haircuts on Intrade

When Intrade halted trading abruptly on March 10, my initial reaction was that the company had commingled member funds with its own, MF Global style, in violation of its Trust and Security Statement. I suspected that these funds were then dissipated (or embezzled), leaving the firm unable to honor requests for redemption.

The latest announcement from the company confirms that something along these lines did, in fact, occur:
We have now concluded the initial stages of our investigations about the financial status of the Company, and it appears that the Company is in a cash “shortfall” position of approximately US $700,000 when comparing all cash on hand in Company and Member bank accounts with Member account balances on the Exchange system.
A shortfall of this kind could not have emerged if member funds had been kept separate from company funds. As it stands, the exchange is technically insolvent and faces imminent liquidation.

But the company is looking for a way to "rectify this cash shortfall position" in hopes of resuming operations and returning to viability. It has requested members with large accounts to formally agree to allow the exchange to hold on to some portion of their funds indefinitely:
The Company has now contacted all members with account balances greater than $1000, and proposed a “forbearance” arrangement between these members and the Company, which if sufficient members agree, would allow the Company to remain solvent... 
By Tuesday, April 16, 2013, we expect to be able to inform our members if sufficient forbearance has been achieved. If so, we will then resume limited operations of the Company and we will be able to process requests for withdrawals as agreed. If sufficient forbearance has not been achieved, it seems extremely likely that the Company will be forced into liquidation.
So traders find themselves in a strategic situation similar to that faced by holders of Greek sovereign debt a couple of years ago. If enough members accept the proposed haircut, then the remaining members (who do not accept) will be able to withdraw their funds. The company might then be able to resume operations and eventually allow unrestricted withdrawals. But if enough forbearance is not forthcoming, the company will be forced into immediate liquidation.

What should one do under such circumstances? As Jeff Ely might say, consider the equilibrium.  The best case outcome from the perspective of any one member would be immediate reimbursement in full. But this can only happen if the member in question denies the company's request, while enough other members agree to it. As long as members can't coordinate their actions, and each believes that his own choice is unlikely to be decisive, it makes no sense for any of them to accept the haircut. Liquidation under these circumstances seems inevitable.
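The strategic situation can be written down as a stylized participation game. The threshold, haircut, and salvage value below are illustrative assumptions, not Intrade's actual terms.

```python
# With n members, the exchange survives if at least k accept the
# haircut h; acceptors then recover (1 - h) of their balance and
# holdouts recover in full. If fewer than k accept, liquidation pays
# everyone only a salvage fraction s.
def payoff(accepts, others_accepting, k=60, h=0.25, s=0.10, balance=1.0):
    survives = others_accepting + (1 if accepts else 0) >= k
    if survives:
        return balance * (1 - h) if accepts else balance
    return balance * s

# Holding out does at least as well except in the knife-edge case
# where one's own acceptance is pivotal:
for others in (0, 59, 100):  # doomed anyway / pivotal / saved anyway
    print(others, payoff(True, others), payoff(False, others))
```

With members scattered and unable to coordinate, no one expects to be the pivotal voter, so holding out looks individually rational even though universal holdout guarantees liquidation.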

On the other hand, what choice do members really have? Although their funds are senior to all other claims on the firm's assets, the cash shortfall will prevent such claims from being honored in full. And since members are scattered across multiple jurisdictions and lack the power to coordinate their response, even partial recovery through litigation seems improbable. Facing little or no prospect of getting anything back anytime soon, some might choose to roll the dice one last time.

The obvious lesson in all this is that in the absence of vigorous oversight, "trust and security" statements can't really be trusted to provide security. 

Sunday, March 24, 2013

Albert Hirschman and the Happiness of Pursuit

The following is the text of my remarks at a gathering in memory of Albert Hirschman, held earlier today at the Institute for Advanced Study. The event included moving recollections from members of his family, as well as tributes by Joan Scott, Jeremy Adelman, Michael Walzer, Amartya Sen, Annie Cot, Wolf Lepenies, William Sewell, James Wolfensohn, and Robbert Dijkgraaf. Fernando Henrique Cardoso could not attend but sent written remarks that were read out by Adelman. I'll update this post with links to the text or video of other speeches should any become available.

---

It’s an enormous privilege to have been invited to speak at this event in memory of Albert Hirschman. Unlike most of the other speakers here, I knew Albert only from a distance, based largely on his books and interviews. I met him in person just once, though I was fortunate enough to get to know Sarah a little during my year at the Institute.

Since my connection to Albert was largely through his writing, I’d like to speak about his love of language, his gift for expression, and his approach to the written word. To Albert, words were not merely vehicles for the transmission of ideas—they were objects to be played with and molded into structures in which one could perpetually take delight.

In 1993 Albert gave an interview to a group of Italian writers, which he later translated into English and published under the title Crossing Boundaries. I’d like to quote a segment of that interview that sums up very nicely both his playful relationship with language and the great originality of his ideas. This is what he said:
I enjoy playing with words, inventing new expressions. I believe there is much more wisdom in words than we normally assume.... Here is an example.  
One of my recent antagonists, Mancur Olson, uses the expression "logic of collective action" in order to demonstrate the illogic of collective action, that is, the virtual unlikelihood that collective action can ever happen. At some point I was thinking about the fundamental rights enumerated in the Declaration of Independence and that beautiful expression of American freedom as "the right to life, liberty, and the pursuit of happiness." I noted how, in addition to the pursuit of happiness, one might also underline the importance of the happiness of pursuit, which is precisely the felicity of taking part in collective action. I simply was happy when that play on words occurred to me.
This idea of the happiness of pursuit, the pleasure that one takes in collective action, was to be a central theme in his masterpiece Exit, Voice and Loyalty, published in 1970. Two centuries earlier, Adam Smith had spoken of our propensity to "truck, barter, and exchange one thing for another." Albert Hirschman spoke instead of a propensity to protest, complain, and generally "kick up a fuss." This articulation of discontent he called Voice.

Albert believed that voice was an important factor in arresting and reversing decline in firms, organizations, and states. Economists to that point had focused on a very different mechanism, namely desertion or exit, and had argued that greater competition, in the form of greater ease of exit, was a beneficial force in maintaining high levels of organizational performance.

Albert pointed out that there was a trade-off between exit and voice; that greater ease of exit could result in a stifling of voice as the individuals most inclined to protest and complain chose to depart instead. He also observed that loyalty, provided that it was not completely blind and uncritical, could serve to delay exit and thus create the space for voice to do its work.

What Albert did in Exit, Voice and Loyalty was nothing less than to reunite two disciplines, economics and political science, which had once been closely entwined but had drifted far apart over time. And he did this not by exporting the methods of economics to the analysis of politics, as others had done, but by emphasizing the importance of political activity within the economic sphere.

This kind of interdisciplinarity permeated all of Albert’s work. He described the idea of trespassing as "basic to his thinking." Crossing boundaries came naturally to him; he was too restless and playful to be confined to a single discipline. He was also an intellectual rebel, eager to question conventional wisdom whenever he found it wanting. In fact, he did so even when the conventional wisdom had been established by his own prior work. He referred to this as a propensity to self-subversion, which he called a "permanent trait of his intellectual personality."

I recall vividly and fondly my very first contact with Albert’s work. I had just begun graduate school, having never previously studied economics, and found myself in a course on the History of Economic Thought with the legendary Robert Heilbroner. It was Heilbroner’s book The Worldly Philosophers that had steered me to economics in the first place. And there on his syllabus, alongside Smith and Ricardo and Malthus, was Albert’s book The Passions and the Interests.

I recently went back and read this extraordinary book for a second time. The twentieth anniversary edition has a foreword by Amartya Sen, who considers it to be "among the finest" of Albert’s writings. Albert himself, in the preface to this edition, notes that it’s the one book that never fell victim to his propensity to self-subversion.

There’s a memorable passage in the book where Albert discusses Adam Smith’s claim that "order and good government" came to England as the unintended consequence of a growing taste for manufactured luxuries among the feudal elite. They "bartered their whole power and authority," says Smith, for the "gratification of… vanities… for trinkets and baubles, better fit to be the playthings of children than the serious pursuits of man." Having squandered their wealth in this manner, they could no longer support their vast armies of retainers, and became incapable of "disturbing the peace" or "interrupting the regular execution of justice."

But Albert was skeptical that the feudal lords had been quite so blind to their long-term interests. He felt that Smith, always eager to uncover the unintended effects of human action, had overreached this time. And he expressed this thought as follows:
One cannot help feeling that in this particular instance, Smith overplayed his Invisible Hand.
I can just imagine the smile that spread across Albert’s face when he came up with that turn of phrase.

Albert’s work was expansive and visionary, bold and audacious, breathtakingly original and creative. But most of all, it was playful and gently irreverent. He demonstrated to us, by his own example, the happiness of intellectual pursuit. For that, more than anything else, I’ll always be grateful.

Monday, March 11, 2013

A Prediction Market Mystery

The peer-to-peer prediction market Intrade ceased operations yesterday and closed out all open positions without notice. Visitors to the site were greeted with the following mysterious message (emphasis added):
With sincere regret we must inform you that due to circumstances recently discovered we must immediately cease trading activity on www.intrade.com. 
These circumstances require immediate further investigation, and may include financial irregularities which in accordance with Irish law oblige the directors to take the following actions:
  • Cease exchange trading on the website immediately.
  • Settle all open positions and calculate the settled account value of all Member accounts immediately.
  • Cease all banking transactions for all existing Company accounts immediately.
During the upcoming weeks, we will investigate these circumstances further and determine the necessary course of action. 
To mitigate any further risk to members’ accounts, we have closed and settled all open contracts at fair market value as of the close of business on March 10, 2013, in accordance with the Terms and Conditions of our customers’ use of the website. You may view your account details and settled account balances by logging into the website. 
At this time and until further notice, it is not possible to make any payments to members in accordance with their settled account balance until the investigations have concluded.
Translation: all open contracts have been closed out at current prices, account balances now reflect only cash positions, and no withdrawals can be made until further notice. Not a penny will be paid out to any member for the time being, no matter how large their cash balance may be.

What on earth is going on? My best guess is that the margin posted by traders was not held, as it should be, in segregated accounts separate from company funds. When bets are made on this market, both parties must post margin equal to their worst-case loss, so that neither is subject to counterparty risk. In effect, each party is taking a position against the exchange, but these positions are exactly offsetting so the exchange bears no risk. To ensure that all promised payments can be made, these funds must be held in the form of cash, insured deposits, or safe dollar-denominated securities such as Treasury bills. They cannot be invested in risky assets, and cannot be used for the payment of salaries or expenses.
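The margin mechanics described above can be made concrete with a minimal sketch. This is illustrative only, not Intrade's actual settlement code; the function name and conventions are invented:

```python
# Worst-case-loss margining for a binary contract paying 1 unit if the event
# occurs, priced between 0 and 1. Hypothetical sketch, not Intrade's code.

def required_margin(side: str, price: float, quantity: int) -> float:
    """Worst-case loss on the position, which must be posted as margin.

    A buyer's worst case is the event not occurring: the price paid is lost.
    A seller's worst case is the event occurring: (1 - price) per contract is lost.
    """
    if side == "buy":
        return price * quantity
    if side == "sell":
        return (1.0 - price) * quantity
    raise ValueError("side must be 'buy' or 'sell'")

# For a matched trade the two margins sum to the full payout, so the exchange
# can settle either outcome entirely out of posted margin, bearing no risk itself.
price, qty = 0.60, 100
assert abs(required_margin("buy", price, qty)
           + required_margin("sell", price, qty) - qty) < 1e-9
```

The safety of the design rests entirely on that final identity, which is why the posted margin must sit untouched in segregated, safe accounts until settlement.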

All this was made clear in the exchange's so-called Trust and Security Statement:
Segregated Funds: Your funds are held in segregated accounts with banks in Ireland, and are segregated from Intrade's own corporate funds. 
Safer by Design: If the Dow Jones crashes, the New York Stock Exchange doesn't go bankrupt. In the same way, intrade doesn't lose money when an unusual result arises. Whenever you trade, intrade will 'freeze' sufficient money in your account to cover your potential losses. If you lose, we simply transfer the already frozen money from your account to a winning customer account. If you win, we pay your winnings from a losing customer account.
While this design is safe in theory, there was no mechanism in place to ensure that these commitments would, in fact, be met. When Intrade closed its doors to US residents in November, it did so in response to an action by the CFTC. I wondered at the time whether there was regulatory concern about the segregation of funds:
Even though the exchange claims to keep this margin in segregated accounts, separate from company funds, there is always the possibility that its deposits are not fully insured and could be lost if the Irish banking system were to collapse. These losses would ultimately be incurred by traders, who would then have very limited legal recourse.
Similar concerns were raised in an exchange with Dave Pennock on twitter. I thought at the time that the biggest risk came from failures in the Irish banking system, and discounted the possibility that trader margin could be deliberately co-mingled with company funds, invested in risky securities, or simply embezzled. This may have been too optimistic a view, especially given the precedent of MF Global.

If some funds have been diverted or lost, then traders face the prospect of receiving less than par on their cash balances when withdrawals eventually resume. And even if they do not suffer eventual losses, the fact that their funds are frozen for an extended period itself imposes an opportunity cost. If there's a lesson in all this, it is that markets cannot exist without trust, and trust cannot be sustained indefinitely without some sort of oversight and regulation. Reputational effects alone are simply not enough.

Friday, March 01, 2013

Why Do Groupon Campaigns Damage Yelp Ratings?

One of the many benefits of visiting Microsoft Research this semester is that I get to attend some interesting talks by computer scientists working with social and economic data. One in particular this week turned out to be extremely topical. The paper was on "The Groupon Effect on Yelp Ratings" and it was presented by Georgios Zervas.

The starting point of the analysis was this: the Yelp ratings of businesses that launch Groupon campaigns suffer a sharp and immediate decline that recovers only gradually over time, with persistent effects lasting for well over a year. The following chart sums it up:


The trend line is a 30-day moving average, re-initialized on the launch date (so the first few points after this date average just a few observations). There is a second sharp decline after about 180 days, as the coupons are about to expire. The chart also shows the volume of ratings, which surges after the launch date. Part of the surge is driven by raters who explicitly reference Groupon (the darker volume bars). But not all Groupon users identify as such in their reviews, and about half the increase in ratings volume comes from ratings that do not reference Groupon.

As is typical of computer scientists working with social data, the number of total observations is enormous. Almost 17,000 daily deals from over 5,000 businesses in 20 cities over a six month period are included, along with review histories for these businesses both during and prior to the observational window. In addition, the entire review histories of those who rated any of these businesses during the observational window were collected, expanding the set of reviews to over 7 million, and covering almost a million distinct businesses in all.

So what accounts for the damage inflicted on Yelp ratings by Groupon campaigns? The authors explore several hypotheses. Groupon users could be especially harsh reviewers regardless of whether or not they are rating a Groupon business. Businesses may be overwhelmed by the rise in demand, resulting in a decline in quality for all customers. The service provided to Groupon users may be worse than that provided to customers paying full price. Customer preferences may be poorly matched to businesses they frequent using Groupons. Or the ratings prior to the campaign may be artificially inflated by fake positive reviews, which get swamped by more authentic reviews after the campaign. All of these seem plausible and consistent with anecdotal evidence.

One hypothesis that is rejected quite decisively by the data is that Groupon users tend to be harsh reviewers in general. To address this, the authors looked at the review histories of those who identified Groupon use for the businesses in the observational window. Most of these prior reviews do not involve Groupon use, which allows for a direct test of the hypothesis that these raters were harsh in general. It turns out that they were not. Groupon users tend to write detailed and informative reviews that are more likely to be considered valuable, cool and funny by their peers. But they do not rate businesses without Groupon campaigns more harshly than other reviewers.

What about the hypothesis of businesses being overwhelmed by the rise in demand? Since only about half the surge in reviews comes from those who explicitly reference Groupon, the remaining ratings pool together non-Groupon customers with those who don't reveal Groupon use. This makes a decline in ratings by the latter group hard to interpret. John Langford (who was in the audience) noted that if the entire surge in reviews could be attributed to Groupon users, and if undeclared and declared users had the same ratings on average, then one could infer the effect of the campaign on the ratings of regular customers. This seems worth pursuing.

Anecdotal evidence on discriminatory treatment of customers paying discounted prices is plentiful (the authors mention the notorious FTD flowers case for instance). If mistreatment of coupon-carrying customers by a few bad apples were bringing down the ratings average, then a campaign should result in a more negatively skewed distribution of ratings relative to the pre-launch baseline. The authors look for this shift in skewness and find some evidence for it, but the effect is not large enough to account for the entire drop in the average rating.

To test the hypothesis that ratings prior to a campaign are artificially inflated by fake or purchased reviews, the authors look at the rate at which reviews by self-identified Groupon users are filtered, compared with the corresponding rate for reviews that make no mention of Groupon. (Yelp allows filtered reviews to be seen, though they are harder to access and are not used in the computation of ratings). Reviews referencing Groupon are filtered much less often, suggesting that they are more likely to be authentic. If Yelp's filtering algorithm is lenient enough to let a number of fake reviews through, then the post-campaign ratings will be not just more numerous but also more authentic and less glowing.

Finally, consider the possibility of a mismatch between the preferences of Groupon users and the businesses whose offers they accept. To look for evidence of this, the authors consider the extent to which reviews associated with Groupon use reveal experimentation on the part of the consumer. This is done by comparing the business category and location to the categories and locations in the reviewer's history. Experimentation is said to occur when the business category or zipcode differs from any in the reviewer's history. The data provide strong support for the hypothesis that individuals are much more likely to be experimenting in this sense when using a Groupon than when not. And such experimentation could plausibly lead to a greater incidence of disappointment.
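The experimentation test described here is simple to state in code. A minimal sketch, with hypothetical field names rather than those of the authors' dataset:

```python
# A review counts as "experimentation" if the business's category or zipcode
# is absent from the reviewer's prior history. Field names are illustrative.

def is_experimentation(review: dict, history: list) -> bool:
    past_categories = {r["category"] for r in history}
    past_zipcodes = {r["zipcode"] for r in history}
    return (review["category"] not in past_categories
            or review["zipcode"] not in past_zipcodes)

history = [{"category": "thai", "zipcode": "98052"},
           {"category": "pizza", "zipcode": "98052"}]

# A first-ever spa review counts as experimentation...
assert is_experimentation({"category": "spa", "zipcode": "98052"}, history)
# ...while a repeat visit to a familiar category and neighborhood does not.
assert not is_experimentation({"category": "thai", "zipcode": "98052"}, history)
```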

This point deserves further elaboration. Even without experimentation on categories or locations, an individual who accepts a daily deal has a lower anticipated valuation for the product or service than someone who pays full price. Even if the expectations of both types of buyers are met, and each feels that they have gotten what they paid for, there will be differences in the ratings they assign. To take an extreme case, if the product were available for free, many buyers would emerge who consider the product essentially worthless, and would rate it accordingly even if their expectations are met.

There may be a lesson here for companies contemplating Groupon campaigns. Perhaps the Yelp rating would suffer less damage if the discount itself were not as steep. At present there is very little variation in discounts, which are mostly clustered around 50%. So there's no way to check whether smaller discounts actually result in better ratings relative to larger discounts. But it certainly seems worth exploring, at least for businesses that depend on strong ratings to thrive.

The Groupon strategy of prioritizing growth above earnings had been criticized on the grounds that there are few barriers to entry in this industry, and no network externalities that can protect an incumbent from competition. But if the link between campaigns and ratings can't be broken, there may be deeper problems with the business model than a change of leadership or strategy can solve.

Tuesday, February 19, 2013

A Combinatorial Prediction Market

I'm on leave this semester, visiting Microsoft Research's recently launched New York lab. It's a lively and stimulating place and there are a number of interesting projects underway. In this post I'd like to report on one of these, a prediction market developed by a team composed of David Pennock, David Rothschild, Miroslav Dudik, Jennifer Vaughan, and Sébastien Lahaie.

This market is very different from peer-to-peer real money prediction markets such as IEM or Intrade, in that individual participants take positions not against each other but against an algorithmic market maker that adjusts prices in response to orders placed. Furthermore, a broad and complex range of events are priced, orders of arbitrary size can be met, and consistency across prices is maintained by the immediate identification and exploitation of arbitrage opportunities.

The market for Oscar predictions is now live, and it's easy to participate. You can log in with a Google account (or Facebook or Twitter) or create a new PredictWise account. You'll be credited with 1000 points which may be used to buy a range of contracts. These include contracts on simple events, such as "Lincoln to win Best Picture." But they also include events that reference multiple categories: you can bet on the event "Argo to win Best Picture and Daniel Day-Lewis to win Best Actor in a Leading Role," or "Zero Dark Thirty to win between 3 and 5 awards" for example.

All of these contracts are priced, but the price is sensitive to order size. For small orders one can buy at the currently posted odds. For instance, for "Lincoln to win Best Picture," current odds are 10.4%, so an expenditure of 0.104 units will return 1 unit if the event occurs:


But placing a larger order, say for 1.04 units, returns less than 10:


The functional relationship between the price and quantity vectors is deterministic and satisfies three conditions: (i) the purchase of a contract (or portfolio) raises its price smoothly, (ii) this happens in such a manner as to bound the maximum possible loss incurred by the market maker, no matter how large the order size, (iii) contracts that are obvious complements, such as "Lincoln to win Best Picture" and "Lincoln not to win Best Picture" have prices that sum to 1.
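One standard pricing rule with all three of these properties is Hanson's logarithmic market scoring rule (LMSR). The sketch below illustrates the idea only; it is not the actual engine behind this market, and the class and parameter names are invented:

```python
import math

# Hanson's logarithmic market scoring rule (LMSR): a cost-function market
# maker with smooth prices, bounded worst-case loss, and complementary
# prices that sum to 1. Illustrative sketch, not the PredictWise engine.

class LMSRMarketMaker:
    def __init__(self, outcomes, b=100.0):
        self.b = b                               # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}      # contracts sold per outcome

    def cost(self, q):
        # C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

    def price(self, outcome):
        # Instantaneous price is dC/dq_i, a smooth function of inventory.
        z = sum(math.exp(x / self.b) for x in self.q.values())
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        # The trader pays the change in the cost function.
        before = self.cost(self.q)
        self.q[outcome] += shares
        return self.cost(self.q) - before

mm = LMSRMarketMaker(["Lincoln wins", "Lincoln does not win"])
# (iii) complementary prices sum to 1 by construction:
assert abs(mm.price("Lincoln wins") + mm.price("Lincoln does not win") - 1.0) < 1e-9
# (i) buying raises the price smoothly:
mm.buy("Lincoln wins", 50)
assert mm.price("Lincoln wins") > 0.5
```

Property (ii) also holds: whatever the order flow, the market maker's loss is bounded by b times the log of the number of outcomes, which is why the liquidity parameter b must be chosen with care.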

What makes this market interesting is that the algorithm ensures consistency in prices across linked contracts, quickly exploiting and eliminating any arbitrage opportunities that might arise. Some of these arbitrage conditions are not immediately transparent, as the following simplified example reveals.

Suppose that there were only two Oscar categories (Best Picture and Best Director) and consider the following seven events:
  1. Lincoln to win Best Picture
  2. Lincoln not to win Best Picture
  3. Lincoln to win Best Director
  4. Lincoln not to win Best Director
  5. Lincoln to win 0 Oscars
  6. Lincoln to win 1 Oscar
  7. Lincoln to win 2 Oscars
Let pi denote the price of contract i, for i = 1,...,7, where each contract pays out one dollar if the event in question occurs. Clearly we must have

p1 + p2 = p3 + p4 = 1, 

otherwise there would be an arbitrage opportunity. Similarly, we must have

p5 + p6 + p7 = 1. 

Price adjustments in response to orders are such that these equalities are continuously maintained. Somewhat less obviously, we must also have

p1 + p3 = p6 + 2p7.

If this condition were violated, then one could construct a portfolio that guaranteed a positive profit no matter what the eventual outcome may be. To see this, suppose that prices were such that

p1 + p3 > p6 + 2p7.

In this case, the following portfolio would yield a risk-free profit: buy one unit each of contracts 2, 4, and 6, and two units of contract 7. This would cost

(1 - p1) + (1 - p3) + p6 + 2p7 < 2. 

The payoff from this portfolio would be exactly 2, no matter how things turn out. If Lincoln wins no Oscars then contracts 2 and 4 each pay out one unit, if it wins one Oscar then contract 6 pays out a unit, in addition to either contract 2 or contract 4, and if it wins two Oscars then each of the two units of contract 7 pays out.
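This payoff argument can be verified mechanically by enumerating the four possible outcomes. A minimal sketch:

```python
# Check that the hedged portfolio -- one unit each of contracts 2, 4, and 6,
# plus two units of contract 7 -- pays exactly 2 in every outcome.

def portfolio_payoff(wins_picture: bool, wins_director: bool) -> int:
    n_oscars = int(wins_picture) + int(wins_director)
    payoff = 0
    payoff += 1 if not wins_picture else 0   # contract 2: not Best Picture
    payoff += 1 if not wins_director else 0  # contract 4: not Best Director
    payoff += 1 if n_oscars == 1 else 0      # contract 6: exactly 1 Oscar
    payoff += 2 if n_oscars == 2 else 0      # two units of contract 7: 2 Oscars
    return payoff

for bp in (False, True):
    for bd in (False, True):
        assert portfolio_payoff(bp, bd) == 2
```

Since the portfolio costs less than 2 whenever the inequality above holds but always pays exactly 2, any violation of the parity condition hands the arbitrageur a risk-free profit.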

In reality, there are more than two categories for which Lincoln has been nominated, and the arbitrage conditions are accordingly more complex. The point is that whenever trades occur that cause these conditions to be violated, the algorithm itself begins to execute additional trades that exploit the opportunity, shifting prices in such a manner as to restore parity. In peer-to-peer markets this activity is left to the participants themselves; trader-developed algorithms have been in widespread use on Intrade for instance.

One problem with peer-to-peer markets is that only a few contracts can have significant liquidity, and complex combinations of events will therefore not be transparently and consistently priced. But the design described here allows for the consistent valuation of any combination of events, at the cost of subjecting the market maker to potential loss. The pricing function is designed to place a bound on this loss, but it cannot be avoided entirely because market participants have access to information that the market maker lacks.

Could this be a template for markets on compound events in the future? It certainly seems possible, if the internally generated arbitrage profits are large enough to compensate for the information disadvantage faced by the market maker. But at the moment this is a research initiative, focused on evaluating the effectiveness of the mechanism for aggregating distributed information. This goal is best served by broad participation and better data, so if you have a few minutes to spare before the Oscar winners are announced on Sunday, why not log in and place a few (hypothetical) bets?

Tuesday, December 11, 2012

Remembering Albert Hirschman

Albert Hirschman, among the greatest of social scientists, has died. He was truly one of a kind: always trespassing, relentlessly self-subversive, and never constrained by disciplinary boundaries.

Hirschman's life was as extraordinary as his work. Born in Berlin in 1915, he was educated in French and German. He would later gain fluency in Italian, then Spanish and English. He fled Berlin for Paris in 1933, and joined the French army in 1939. Fearful of being shot as a traitor by advancing German forces, he took on a new identity as a Frenchman, Albert Hermant. In 1941 he migrated to the United States, met and married Sarah Hirschman, joined the US Army, and soon found himself back in Europe as part of the war effort. After the end of hostilities he was involved in the development of the Marshall Plan, and subsequently spent four years in Bogotá where many of his ideas on economic development took shape. He and Sarah were married for more than seven decades; she died in January of this year.

Not only did Hirschman write several brilliant books in what was his fourth or fifth language, he also entertained himself with the invention of palindromes. Many of these were collected together in a book, Senile Lines by Dr. Awkward, which he presented to his daughter Katya. Forms of expression mattered to him as much as the ideas themselves. In opposition to Mancur Olson, he believed that collective action was an activity that came naturally to us humans, and was thrilled to find that one could invert a phrase in the declaration of independence to express this inclination as "the happiness of pursuit."

Hirschman's intellectual contributions were varied and many but the jewel in the crown is his masterpiece Exit, Voice and Loyalty. In this one slim volume, he managed to overturn conventional wisdom on one issue after another, and chart several new directions for research. The book is concerned with the mechanisms that can arrest and reverse declines in the performance of firms, organizations, and states. It was the interplay of two such mechanisms - desertion and articulation, or exit and voice - which Hirschman considered to be of central importance.

Exit, for instance through the departure of customers or employees or citizens in favor of a rival, can alert an organization to its own decline and set in motion corrective measures. But so can voice, or the articulation of discontent. Too rapid a rate of exit can undermine voice and result in organizational collapse instead of recovery. But a complete inability to exit can make voice futile, and poor performance can continue indefinitely.

Poorly functioning organizations prefer that an exit option be available to their most strident critics, so that they are left with less demanding customers or members or citizens. Hence a moderate amount of exit can result in the worst of all worlds, "an oppression of the weak by the incompetent and an exploitation of the poor by the lazy which is the more durable and stifling as it is both unambitious and escapable." Near-monopolies with exit options for the most severely discontented can therefore function more poorly than complete monopolies. It is not surprising that many dysfunctional states welcome the voluntary exile of their fiercest internal critics.

The propensity to exit is itself determined by the extent of loyalty to a firm or state. Loyalty slows down the rate of exit and can allow an organization time to recover from lapses in performance. But blind loyalty, which stifles voice even as it prevents exit, can allow poor performance to persist. It is in the interest of organizations to promote loyalty and raise the "price of exit", but the short term gains from doing so can lead to eventual collapse as both mechanisms for recuperation are weakened.

Among Hirschman's many targets were the Downsian model of political competition and the Median Voter Theorem. Since he considered collective action to be an expression of voice, readily adopted in response to dissatisfaction, there was no such thing as a "captive voter." Those on the fringes of a political party could not be taken for granted simply because they had no exit option: the inability to exit just strengthened their inclination to exercise voice. This they would do with relish, driving parties away from the median voter, as political leaders trade off the fear of exit by moderates against the threat of voice by extremists.

Albert Hirschman lived a long and eventful life and was a joyfully iconoclastic thinker. His books will be read by generations to come. But he will always remain something of an outsider in the profession; his ideas are just too broad and interdisciplinary to find neat expression in models and textbooks. He was an intellectual rebel throughout his life, and it is only fitting that he remain so in perpetuity. 

Friday, December 07, 2012

Risk and Reward in High Frequency Trading

A paper on the profitability of high frequency traders has been attracting a fair amount of media attention lately. Among the authors is Andrei Kirilenko of the CFTC, whose earlier study of the flash crash used similar data and methods to illuminate the ecology of trading strategies in the S&P 500 E-mini futures market. While the earlier work examined transaction level data for four days in May 2010, the present study looks at the entire month of August 2010. Some of the new findings are startling, but need to be interpreted with greater care than is taken in the paper.

High frequency traders are characterized by large volume, short holding periods, and limited overnight and intraday directional exposure:
For each day there are three categories a potential trader must satisfy to be considered a HFT: (1) Trade more than 10,000 contracts; (2) have an end-of-day inventory position of no more than 2% of the total contracts the firm traded that day; (3) have a maximum variation in inventory scaled by total contracts traded of less than 15%. A firm must meet all three criteria on a given day to be considered engaging in HFT for that day. Furthermore, to be labeled an HFT firm for the purposes of this study, a firm must be labeled as engaging in HFT activity in at least 50% of the days it trades and must trade at least 50% of possible trading days. 
Of more than 30,000 accounts in the data, only 31 fit this description. But these firms dominate the market, accounting for 47% of total trading volume and appearing on one or both sides of almost 75% of traded contracts. And they do this with minimal directional exposure: average intraday inventory amounts to just 2% of trading volume, and the overnight inventory of the median HFT firm is precisely zero.
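The three daily criteria quoted above translate directly into a filter. The sketch below codes them as stated; the field names are illustrative, not those of the CFTC dataset:

```python
# The paper's three criteria for labeling a trader-day as HFT activity:
# (1) more than 10,000 contracts traded; (2) end-of-day inventory no more
# than 2% of contracts traded; (3) maximum intraday inventory variation
# below 15% of contracts traded. Field names are illustrative.

def is_hft_day(contracts_traded: int,
               end_of_day_inventory: int,
               max_inventory_variation: int) -> bool:
    return (contracts_traded > 10_000
            and abs(end_of_day_inventory) <= 0.02 * contracts_traded
            and max_inventory_variation < 0.15 * contracts_traded)

# A firm trading 50,000 contracts, ending the day 500 contracts long, with a
# peak intraday inventory swing of 6,000 contracts, qualifies:
assert is_hft_day(50_000, 500, 6_000)
# The same volume with 5,000 contracts held at the close does not:
assert not is_hft_day(50_000, 5_000, 6_000)
```

The firm-level label then requires meeting all three criteria on at least half of the days a firm trades, while trading on at least half of all possible trading days.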

This small set of firms is then further subdivided into categories based on the extent to which they are providers of liquidity. For any given trade, the liquidity taker is the firm that initiates the transaction, by submitting an order that is marketable against one that is resting in the order book. The counterparty to the trade (who previously submitted the resting limit order) is the liquidity provider. Based on this criterion, the authors partition the set of high frequency traders into three subcategories: aggressive, mixed, and passive:
To be considered an Aggressive HFT, a firm must... initiate at least 40% of the trades it enters into, and must do so for at least 50% of the trading days in which it is active. To be considered a Passive HFT a firm must initiate fewer than 20% of the trades it enters into, and must do so for at least 50% of the trading days during which it is active. Those HFTs that meet neither... definition are labeled as Mixed HFTs. There are 10 Aggressive, 11 Mixed, and 10 Passive HFTs.
This heterogeneity among high frequency traders conflicts with the common claim that such firms are generally net providers of liquidity. In fact, the authors find that "some HFTs are almost 100% liquidity takers, and these firms trade the most and are the most profitable."

Given the richness of their data, the authors are able to compute profitability, risk-exposure, and measures of risk-adjusted performance for all firms. Gross profits are significant on average but show considerable variability across firms and over time. The average HFT makes over $46,000 a day; aggressive firms make more than twice this amount. The standard deviation of profits is five times the mean, and the authors find that "there are a number of trader-days in which they lose money... several HFTs even lose over a million dollars in a single day."

Despite the volatility in daily profits, the risk-adjusted performance of high frequency traders is found to be spectacular:
HFTs earn above-average gross rates of return for the amount of risk they take. This is true overall and for each type... Overall, the average annualized Sharpe ratio for an HFT is 9.2. Among the subcategories, Aggressive HFTs (8.46) exhibit the lowest risk-return tradeoff, while Passive HFTs do slightly better (8.56) and Mixed HFTs achieve the best performance (10.46)... The distribution is wide, with an inter-quartile range of 2.23 to 13.89 for all HFTs. Nonetheless, even the low end of HFT risk-adjusted performance is seven times higher than the Sharpe ratio of the S&P 500 (0.31).
These are interesting findings, but there is a serious problem with this interpretation of risk-adjusted performance. The authors are observing only a partial portfolio for each firm, and cannot therefore determine the firm's overall risk exposure. It is extremely likely that these firms are trading simultaneously in many markets, in which case their exposure to risk in one market may be amplified or offset by their exposures elsewhere. The Sharpe ratio is meaningful only when applied to a firm's entire portfolio, not to any of its individual components. For instance, it is possible to construct a low-risk portfolio with a high Sharpe ratio that is composed of several high-risk components, each of which has a low Sharpe ratio.
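The last point is easy to demonstrate with a toy example. The numbers below are invented purely for illustration, not drawn from the paper:

```python
import statistics

# A high-Sharpe portfolio built from two volatile components, each with a
# low Sharpe ratio on its own. All figures are made up for illustration.

def sharpe(returns):
    # Per-period Sharpe ratio (mean over standard deviation); the risk-free
    # rate is omitted for simplicity.
    return statistics.mean(returns) / statistics.stdev(returns)

# Two legs of a hypothetical cross-market strategy: each leg swings wildly,
# but the losses on one leg are largely offset by gains on the other.
leg_a = [10.0, -9.0, 11.0, -8.0, 10.5, -9.5]
leg_b = [-9.0, 10.2, -10.0, 8.9, -9.4, 10.5]
combined = [a + b for a, b in zip(leg_a, leg_b)]

# Each leg viewed in isolation looks very risky; the combined book does not.
assert sharpe(leg_a) < 0.5
assert sharpe(combined) > 5.0
```

An observer with data from only one leg would report enormous volatility and a modest Sharpe ratio, precisely the measurement problem that arises when a single futures market is viewed in isolation.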

To take an extreme example, if aggressive firms are attempting to exploit arbitrage opportunities between the futures price and the spot price of a fund that tracks the index, then the authors would have significantly overestimated the firm's risk exposure by looking only at its position in the futures market. Over short intervals, such a strategy would result in losses in one market, offset and exceeded by gains in another. Within each market the firm would appear to have significant risk exposure, even while its aggregate exposure was minimal. Over longer periods, net gains will be more evenly distributed across markets, so the profitability of the strategy can be revealed by looking at just one market. But doing so would provide a very misleading picture of the firm's risk exposure, since day-to-day variations in profitability within a single market can be substantial.

The problem is compounded by the fact that there are likely to be systematic differences across firms in the degree to which they are trading in other markets. I suspect that the most aggressive firms are in fact trading across multiple markets in a manner that lowers rather than amplifies their exposure in the market under study. Under such circumstances, the claim that aggressive firms "exhibit the lowest risk-return tradeoff" is without firm foundation.

Despite these problems of interpretation, the paper is extremely valuable because it provides a framework for thinking about the aggregate costs and benefits of high frequency trading. Since contracts in this market are in zero net supply, any profits accruing to one set of traders must come at the expense of others:
From whom do these profits come? In addition to HFTs, we divide the remaining universe of traders in the E-mini market into four categories of traders: Fundamental traders (likely institutional), Non-HFT Market Makers, Small traders (likely retail), and Opportunistic traders... HFTs earn most of their profits from Opportunistic traders, but also earn profits from Fundamental traders, Small traders, and Non-HFT Market Makers. Small traders in particular suffer the highest loss to HFTs on a per contract basis.
Within the class of high frequency traders is another hierarchy: mixed firms lose to aggressive ones, and passive firms lose to both of the other types.

The operational costs incurred by such firms include payments for data feeds, computer systems, co-located servers, exchange fees, and highly specialized personnel. Most of these costs do not scale up in proportion to trading volume. Since the least active firms must have positive net profitability in order to survive, the net returns of the most aggressive traders must therefore be substantial.

In thinking about the aggregate costs and benefits of all this activity, it's worth bringing to mind Bogle's law:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
The costs to other market participants of high frequency trading correspond roughly to the gross profitability of this small set of firms. What about the benefits? The two most commonly cited are price discovery and liquidity provision. It appears that the net effect on liquidity of the most aggressive traders is negative even under routine market conditions. Furthermore, even normally passive firms can become liquidity takers under stressed conditions when liquidity is most needed but in vanishing supply.

As far as price discovery is concerned, high frequency trading is based on a strategy of information extraction from market data. This can speed up the response to changes in fundamental information, and maintain price consistency across related assets. But the heavy lifting as far as price discovery is concerned is done by those who feed information to the market about the earnings potential of publicly traded companies. This kind of research cannot (yet) be done algorithmically.

A great deal of trading activity in financial markets is privately profitable but wasteful in the aggregate, since it involves a shuffling of net returns with no discernible effect on production or economic growth. Jack Hirshleifer made this point way back in 1971, when the financial sector was a fraction of its current size. James Tobin reiterated these concerns a decade or so later. David Glasner, who was fortunate enough to have studied with Hirshleifer, has recently described our predicament thus:
Our current overblown financial sector is largely built on people hunting, scrounging, doing whatever they possibly can, to obtain any scrap of useful information — useful, that is for anticipating a price movement that can be traded on. But the net value to society from all the resources expended on that feverish, obsessive, compulsive, all-consuming search for information is close to zero (not exactly zero, but close to zero), because the gains from obtaining slightly better information are mainly obtained at some other trader’s expense. There is a net gain to society from faster adjustment of prices to their equilibrium levels, and there is a gain from the increased market liquidity resulting from increased trading generated by the acquisition of new information. But those gains are second-order compared to gains that merely reflect someone else’s losses. That’s why there is clearly overinvestment — perhaps massive overinvestment — in the mad quest for information.
To this I would add the following: too great a proliferation of information extracting strategies is not only wasteful in the aggregate, it can also result in market instability. Any change in incentives that substantially lengthens holding periods and shifts the composition of trading strategies towards those that transmit rather than extract information could therefore be both stabilizing and growth enhancing. 

Wednesday, November 28, 2012

Death of a Prediction Market

A couple of days ago Intrade announced that it was closing its doors to US residents in response to "legal and regulatory pressures." American traders are required to close out their positions by December 23rd, and withdraw all remaining funds by the 31st. Liquidity has dried up and spreads have widened considerably since the announcement. There have even been sharp price movements in some markets with no significant news, reflecting a skewed geographic distribution of beliefs regarding the likelihood of certain events.

The company will survive, maybe even thrive, as it adds new contracts on sporting events to cater to its customers in Europe and elsewhere. But the contracts that made it famous - the US election markets - will dwindle and perhaps even disappear. Even a cursory glance at the Intrade forum reveals the importance of its US customers to these markets. Individuals from all corners of the country with views spanning the ideological spectrum, and detailed knowledge of their own political subcultures, will no longer be able to participate. There will be a rebirth at some point, perhaps launched by a new entrant with regulatory approval, but for the moment there is a vacuum in a once vibrant corner of the political landscape.

The closure was precipitated by a CFTC suit alleging that the company "solicited and permitted" US persons to buy and sell commodity options without being a registered exchange, in violation of US law. But it appears that hostility to prediction markets among regulators runs deeper than that, since an attempt by Nadex to register and offer binary options contracts on political events was previously denied on the grounds that "the contracts involve gaming and are contrary to the public interest."

The CFTC did not specify why exactly such markets are contrary to the public interest, and it's worth asking what the basis for such a position might be.

I can think of two reasons, neither of which is particularly compelling in this context. First, all traders have to post margin equal to their worst-case loss, even though in the aggregate the payouts from all bets will net to zero. This means that cash is tied up as collateral to support speculative bets, when it could be put to more productive uses such as the financing of investment. This is a capital diversion effect. Second, even though the exchange claims to keep this margin in segregated accounts, separate from company funds, there is always the possibility that its deposits are not fully insured and could be lost if the Irish banking system were to collapse. These losses would ultimately be incurred by traders, who would then have very limited legal recourse.
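The mechanics behind the capital diversion effect are easy to sketch. For a binary contract on a 0-100 scale, the buyer's worst-case loss is the price paid and the seller's is the distance to settlement at 100, so between them the two sides lock up the full payout in collateral even though their gains and losses cancel exactly (a stylized illustration, not the exchange's actual rulebook):

```python
# Collateral for one binary contract quoted at `price` (0-100 scale),
# settling at 100 if the event occurs and 0 otherwise.
def worst_case_margin(side: str, price: float) -> float:
    if side == "buy":
        return price          # buyer loses the full price if event fails
    return 100.0 - price      # seller loses (100 - price) if event occurs

price = 28.0
buyer_margin = worst_case_margin("buy", price)
seller_margin = worst_case_margin("sell", price)

# Between them the two sides post the full payout: every bet is fully
# collateralized, so 100 units of cash sit idle per open contract.
assert buyer_margin + seller_margin == 100.0

# Settlement: one side's gain is exactly the other's loss.
for event_occurs in (True, False):
    payoff = 100.0 if event_occurs else 0.0
    assert (payoff - price) + (price - payoff) == 0.0
```

The aggregate zero-sum property is what makes the tied-up collateral a pure diversion: no net wealth is created, yet the full payout must sit in escrow for the life of every contract.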

These arguments are not without merit. But if one really wanted to restrain the diversion of capital to support speculative positions, Intrade is hardly the place to start. Vastly greater amounts of collateral are tied up in support of speculation using interest rate and currency swaps, credit derivatives, options, and futures contracts. It is true that such contracts can also be used to reduce risk exposures, but so can prediction markets. Furthermore, the volume of derivatives trading has far exceeded levels needed to accommodate hedging demands for at least a decade. Sheila Bair recently described synthetic CDOs and naked CDSs as "a game of fantasy football" with unbounded stakes. In comparison with the scale of betting in licensed exchanges and over-the-counter swaps, Intrade's capital diversion effect is truly negligible.

The second argument, concerning the segregation and safety of funds, is more relevant. Even if the exchange maintains a strict separation of company funds from posted margin despite the absence of regulatory oversight, there's always the possibility that its deposits in the Irish banking system are not fully secure. Sophisticated traders are well aware of this risk, which could be substantially mitigated (though clearly not eliminated entirely) by licensing and regulation.

In judging the wisdom of the CFTC action, it's also worth considering the benefits that prediction markets provide. Attempts at manipulation notwithstanding, it's hard to imagine a major election in the US without the prognostications of pundits and pollsters being measured against the markets. They have become part of the fabric of social interaction and conversation around political events.

But from my perspective, the primary benefit of prediction markets has been pedagogical. I've used them frequently in my financial economics course to illustrate basic concepts such as expected return, risk, skewness, margin, short sales, trading algorithms, and arbitrage. Intrade has been generous with its data, allowing public access to order books, charts and spreadsheets, and this information has found its way over the years into slides, problem sets, and exams. All of this could have been done using other sources and methods, but the canonical prediction market contract - a binary option on a visible and familiar public event - is particularly well suited for these purposes.
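The pedagogical appeal comes from how cleanly the basic quantities fall out of a binary contract. For a contract priced at p (on a 0-1 scale) held by a trader whose subjective probability of the event is q, expected return, risk, and skewness are all one-line formulas (a classroom-style sketch; the function and its parameter names are mine, not from any course material):

```python
def binary_stats(p: float, q: float):
    """Expected return, payoff variance, and skewness of a binary
    contract priced at p, under subjective event probability q."""
    exp_return = q / p - 1.0                   # pay p, receive 1 w.p. q
    variance = q * (1.0 - q)                   # Bernoulli(q) settlement
    skewness = (1.0 - 2.0 * q) / variance**0.5 # standardized third moment
    return exp_return, variance, skewness

# A trader who thinks the true probability is 65% sees a contract
# priced at 60 as offering a positive expected return.
r, v, s = binary_stats(p=0.60, q=0.65)
```

Here the expected return is about 8.3%, and the negative skewness reflects the lottery-like payoff profile of a contract likely to pay off: frequent small gains punctuated by occasional total losses.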

The first time I wrote about prediction markets on this blog was back in August 2003. Intrade didn't exist at the time but its precursor, Tradesports, was up and running, and the Iowa Electronic Markets had already been active for over a decade. Over the nine years since that early post, I've used data from prediction markets to discuss arbitrage, overreaction, manipulation, self-fulfilling prophecies, algorithmic trading, and the interpretation of prices and order books. Many of these posts have been about broader issues that also arise in more economically significant markets, but can be seen with great clarity in the Intrade laboratory.

It seems to me that the energies of regulators would be better directed elsewhere, at real and significant threats to financial stability, instead of being targeted at a small scale exchange which has become culturally significant and serves an educational purpose. The CFTC action just reinforces the perception that financial sector enforcement in the United States is a random, arbitrary process and that regulators keep on missing the wood for the trees.

---

Update: NPR's Yuki Noguchi follows up with Justin Wolfers, Thomas Bell, Laurence Lau, and Jason Ruspini here; definitely worth a listen. Brad Plumer's overview of the key issues is also worth a look.

Sunday, November 18, 2012

Curtailing Intellectual Monopoly

I never thought I'd see an RSC policy brief referring to mash-ups and mix-tapes, but I was clearly mistaken.

The document deals in an unusually frank manner with the dismal state of US copyright law. Perhaps too frankly: it was quickly disavowed and taken down on the grounds that publication had occurred "without adequate review." Copies continue to circulate, of course (the link above is to one I posted on Scribd). Although lightly peppered with ideological boilerplate, the brief makes a number of timely and sensible points and is worth reading in full.

Aside from extolling the virtues of "a robust culture of DJ’s and remixing" free from the stranglehold of copyright protection, the authors of the report make the following claims. First, the purpose of copyright law, according to the constitution, is to "promote the progress of science and useful arts" and not to "compensate the creator of the content." Copyright law should therefore be evaluated by the degree to which it facilitates innovation and creative expression. Second, unlike conventional tort law, statutory damages for infringement are "vastly disproportionate from the actual damage to the copyright producer." For instance, Limewire was sued for $75 trillion, "more money than the entire music recording industry has made since Edison’s invention of the phonograph in 1877." Third, the duration of coverage has been expanding, seemingly without limit. In 1790 a 14 year term could be renewed once if the author remained alive; current coverage is for the life of the author plus 70 years. This stifles rather than promotes creative activity.

The economists Michele Boldrin and David Levine have been making these points for years. In their book Against Intellectual Monopoly (reviewed here), they point out that the pace of innovation in industries without patent and copyright protection has historically been extremely rapid. Software could not be patented before 1981, nor financial securities prior to 1998, yet both industries witnessed innovation at a blistering pace. The fashion industry remains largely untouched by intellectual property law, yet new designs keep appearing and enriching their creators. Innovative techniques in professional sports continue to be developed, despite the fact that successful ones are quickly copied and disseminated.

In 19th century publishing, British authors had limited protection in the United States but managed to secure lucrative deals with publishers, allowing the latter to saturate the market at low prices before new entrants could gain a foothold. More recently, commercial publishers have turned a profit selling millions of copies of unprotected government documents. For instance, the 9/11 Commission Report was published by both Norton and Macmillan in 2004, and a third version by Cosimo is now available.

Copyright restrictions for scientific papers are especially illogical, since faculty authors benefit from the widest possible dissemination and citation of their work. Furthermore, in the case of journals owned by commercial publishers, copyright is typically transferred by the author to the publisher. Neither the content creators nor the uncompensated peer-reviewers who evaluate manuscripts for publication benefit from protection in such cases. Fortunately, thanks to the emergence of new high-quality open-access journals sponsored by academic societies, things are starting to change.

It's not clear why the policy brief was taken down, or what motivated it in the first place. Henry Farrell, while agreeing with the positions taken in the report, argues that damage to an industry that has historically supported Democrats may be a factor. In contrast, Jordan Bloom and Alex Tabarrok both believe that pressure on Republicans from the entertainment industry led to the brief being withdrawn. They can't all be right as far as I can see. But less interesting than the motivation for the report is its content, and the long overdue debate on patents and copyrights that could finally be stirred in its wake. 

Wednesday, November 07, 2012

Prediction Market Manipulation: A Case Study

The experience of watching election returns come in has become vastly more social and interactive than it was just a decade ago. Television broadcasts still provide the core public information around which expectations are formed, but blogs and twitter feeds are sources of customized private information that can have significant effects on the evolution of beliefs. And prediction markets aggregate this private information and channel it back into the public sphere.

All of this activity has an impact not only on our beliefs and moods, but also on our behavior. In particular, beliefs that one's candidate of choice has lost can affect turnout. It has been argued, for instance, that early projections of victory for Reagan in 1980 depressed Democratic turnout in California, and that Republican turnout in Florida was similarly affected in 2000 when the state was called for Gore while voting in the panhandle was still underway. For this reason, early exit poll data is kept tightly under wraps these days, and states are called for one candidate or another only after polls have closed.

This effect of beliefs on behavior implies that a candidate facing long odds of victory has an incentive to inflate these odds and project confidence in public statements, lest the demoralizing effects of pessimism cause the likelihood of victory to decline even further. Traditionally this would be done by partisans on television sketching out implausible scenarios and interpretations of the incoming data to boost their supporters. But with the increasing visibility of prediction markets, this strategy is much less effective. If a collapse in the price of a contract on Intrade reveals that a candidate is doing much worse than expected, no amount of cheap talk on television can do much to change the narrative.

Given this, the incentives to interfere with what the markets are saying become quite powerful. Even though trading volume has risen dramatically in prediction markets over recent years, the amount of money required to have a sustained price impact for a few hours remains quite small, especially in comparison with the vast sums now spent on advertising.

In general, I believe that observers are too quick to allege manipulation when they see unusual price movements in such markets. As I noted in an earlier post, a spike in the price of the Romney contract a few days ago was probably driven by naive traders over-reacting to rumors of a game-changing announcement by Donald Trump, rather than by any systematic attempt at price manipulation. My reasons for thinking so were based on the fact that frenzied purchases of a single contract (while ignoring complementary contracts) are terribly ineffective if the goal is to have a sustained impact on prices. If one really wants to manipulate a market, it has to be done by placing large orders that serve as price ceilings and floors, and to do this across complementary contracts in a consistent way.

As it happens, this is exactly what someone tried to do yesterday. At around 3:30 pm, I noticed that the order book for both Obama and Romney contracts on Intrade had become unusually asymmetric, with a large block of buy orders for Romney in the 28-30 range, and a corresponding block of sell orders for Obama in the 70-72 range. Here's the Romney order book:

And here's the book for Obama:


Since the exchange requires traders to post 100% margin (to cover their worst case loss and eliminate counterparty risk), the funds required to place these orders were about $240,000 in total. A non-trivial amount, but probably less than the cost of a thirty-second commercial during primetime.
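To see how a figure of this magnitude could arise, here is a back-of-the-envelope sketch using Intrade's contract terms (prices quoted 0-100, each point worth $0.10, settlement at $10 or $0). The order quantities below are purely hypothetical placeholders chosen to land near the stated total; the actual book sizes are not reproduced here.

```python
POINT_VALUE = 0.10  # dollars per price point on Intrade's scale

def margin_per_contract(side: str, price: int) -> float:
    # Worst-case loss: a buyer can lose the full price paid; a seller
    # can lose the distance to settlement at 100.
    loss_points = price if side == "buy" else 100 - price
    return loss_points * POINT_VALUE

# Hypothetical standing orders: a block of Romney buys at 28-30 and a
# mirror-image block of Obama sells at 70-72 (sizes are illustrative).
romney_buys = [("buy", 28, 15_000), ("buy", 29, 15_000), ("buy", 30, 12_000)]
obama_sells = [("sell", 70, 12_000), ("sell", 71, 15_000), ("sell", 72, 15_000)]

total = sum(qty * margin_per_contract(side, price)
            for side, price, qty in romney_buys + obama_sells)
print(f"total margin posted: ${total:,.0f}")
```

With these assumed sizes the posted margin comes to roughly $240,000, which shows how a wall of orders deep enough to pin two complementary contracts for hours can be built for less than the price of a primetime ad.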

Could this not have been just a big bet, placed by someone optimistic about Romney's chances? I don't think so, for two reasons. First, if one wanted to bet on Romney rather than Obama, much better odds were available elsewhere, for instance on Betfair. More importantly, one would not want to leave such large orders standing at a time when new information was emerging rapidly; the risk of having the orders met by someone with superior information would be too great. Yet these orders stood for hours, and effectively placed a floor on the Romney price and a ceiling on the price for Obama.

Meanwhile odds in other markets were shifting rapidly. Nate Silver noticed the widening disparity and was puzzled by it, arguing that differences across markets should "evaporate on Election Day itself, when the voting is over and there is little seeming benefit from affecting the news media coverage." Much as I admire Nate, I think that he was mistaken here. It is precisely on election day that market manipulation makes most sense, since one only needs to affect media coverage for a few hours until all relevant polls have closed. Voting was still ongoing in Colorado, and keeping Romney viable there was the only hope of stitching together a victory. Florida, Virginia and Ohio were all close at the time and none had been called for Obama. A loss in Colorado would have made these three states irrelevant and a Romney victory virtually impossible.

Given this interpretation, I felt that the floor would collapse once the Colorado polls closed at 9pm Eastern Time, and this is precisely what happened:


Once the floor gave way, the price fell to single digits in a matter of minutes and never recovered.

It turned out, of course, that none of this was to matter: Virginia, Ohio, and (probably) Florida have all fallen to Obama. But all were close, and the possibility of a different outcome could not have been ruled out at the time. The odds were low, and a realistic projection of these odds would have made them even lower. Such is the positive feedback loop between beliefs and outcomes in politics. Under the circumstances, the loss of a few hundred thousand dollars to keep alive the prospect of a Romney victory probably seemed like a good investment to someone.

Should one be concerned about such attempts at manipulation? I don't think so. They muddy the waters a bit but are transparent enough to be spotted quickly and reacted to. My initial post was retweeted within minutes by Justin Wolfers to 24,000 followers, and by Chris Hayes to 160,000 shortly thereafter. Attempts at manipulating beliefs are nothing new in presidential politics, it's just the methods that have changed. And as long as one is aware of the possibility of such manipulation, it is relatively easy to spot and counter. The same social media that transmits misinformation also allows for the broadcast of countervailing narratives. In the end the fog clears and reality asserts itself. Or so one hopes. 
--- 

Update: The following chart shows the Obama price breaking through the ceiling just before the polls closed in Colorado:


It's the extraordinary stability of the price before 8:45pm, which was sustained over several hours, that is suggestive of manipulation.