Friday, October 16, 2015

Threats Perceived When There Are None

Sendhil Mullainathan is one of the most thoughtful people in the economics profession, but he has a recent piece in the New York Times with which I really must take issue.

Citing data on the racial breakdown of arrests and deaths at the hands of law enforcement officers, he argues that "eliminating the biases of all police officers would do little to materially reduce the total number of African-American killings." Here's his reasoning:
According to the F.B.I.’s Supplementary Homicide Report, 31.8 percent of people shot by the police were African-American, a proportion more than two and a half times the 13.2 percent of African-Americans in the general population... But this data does not prove that biased police officers are more likely to shoot blacks in any given encounter...

Every police encounter contains a risk: The officer might be poorly trained, might act with malice or simply make a mistake, and civilians might do something that is perceived as a threat. The omnipresence of guns exaggerates all these risks.

Such risks exist for people of any race — after all, many people killed by police officers were not black. But having more encounters with police officers, even with officers entirely free of racial bias, can create a greater risk of a fatal shooting.

Arrest data lets us measure this possibility. For the entire country, 28.9 percent of arrestees were African-American. This number is not very different from the 31.8 percent of police-shooting victims who were African-Americans. If police discrimination were a big factor in the actual killings, we would have expected a larger gap between the arrest rate and the police-killing rate.

This in turn suggests that removing police racial bias will have little effect on the killing rate. 
A key assumption underlying this argument is that encounters involving genuine (as opposed to perceived) threats to officer safety arise with equal frequency across groups. To see why this is a questionable assumption, consider two types of encounters, which I will call safe and risky. A risky encounter is one in which the confronted individual poses a real threat to the officer; a safe encounter is one in which no such threat is present. But a safe encounter might well be perceived as risky, as the following example of a traffic stop for a seat belt violation in South Carolina vividly illustrates:

Sendhil is implicitly assuming that a white motorist who behaved in exactly the same manner as Levar Jones did in the above video would have been treated in precisely the same manner by the officer in question, or that the incident shown here is too rare to have an impact on the aggregate data. Neither hypothesis seems plausible to me.

How, then, can one account for the rough parity between arrest rates and the rate of shooting deaths at the hands of law enforcement? If officers frequently behave differently in encounters with black civilians, shouldn't one see a higher rate of killing per encounter? 

Not necessarily. To see why, think of the encounter involving Henry Louis Gates and Officer James Crowley back in 2009. This was a safe encounter as defined above, but may not have happened in the first place had Gates been white. If the very high incidence of encounters between police and black men is due, in part, to encounters that ought not to have occurred at all, then a disproportionate share of these will be safe, and one ought to expect fewer killings per encounter in the absence of bias. Observing parity would then be suggestive of bias, and eliminating bias would surely result in fewer killings.

In justifying the termination of the officer in the video above, the director of the South Carolina Department of Public Safety stated that he "reacted to a perceived threat where there was none." Fear is a powerful motivator, and even when there are strong incentives not to shoot, shooting remains preferable to being shot. This is why stand-your-ground laws have resulted in an increased incidence of homicide, even though they narrow the very definition of homicide to exclude certain killings. It is also why homicide is so volatile across time and space, and why staggering racial disparities in both victimization and offending persist.

None of this should detract from the other points made in Sendhil's piece. There are indeed deep structural problems underlying the high rate of encounters, and these need urgent policy attention. But a careful reading of the data does not support the claim that "removing police racial bias will have little effect on the killing rate." On the contrary, I expect that improved screening and better training, coupled with body and dashboard cameras, will result in fewer officers reacting to a perceived threat when there is none.


Update (10/18). I had a useful exchange of emails with Sendhil yesterday. I think that we both care deeply about the issue and are interested in getting to the truth, not in scoring points. But there's no convergence in positions yet. Here's an extract of my last to him (I'm posting it because it might help clarify the argument above):
Definitely you can easily make sense of the data without bias. The question is whether this is the right inference, given what we know about the processes generating encounters.

Suppose (for the sake of argument) that whites have encounters with police only if they are engaging in some criminal activity, while blacks sometimes have encounters with police when they are completely innocent. This need not be due to police bias: it could be because bystanders are more likely to think blacks are up to no good, for instance (Gates and Rice come to mind).

Suppose further that those engaging in criminal activity are threats to the police with some probability, and this is independent of offender race. The innocents are never threats to the police. But cops can't tell black innocents from black criminals, so end up killing blacks and whites at the same rate per encounter. If they could tell them apart, blacks would be killed at a lower rate per encounter. What I mean by bias is really this inability to distinguish; to see threats when none are present. 

I believe that black cops are less likely than white cops to perceive an encounter with an innocent as threatening. If a suspect looks like your cousin, or a guy you sit beside to watch football on Sundays, you are less likely to see him as a threat when he is not. That's why I asked you in Cambridge whether you had data on officer race in killings - when the victim is innocent the officer seems invariably to be white. So a first very rough test of bias would be whether innocents are killed at the same rate by black and white officers...

I've found the Twitter reaction to your post a bit depressing, because better selection, training and video monitoring are really urgent needs in my opinion, and the absence-of-bias narrative can feed complacency about these. I know that was far from your intention, and you are extremely sympathetic to victims of police (and other) violence. You also have a responsibility to speak out on the issue, given your close scrutiny of the data. But I do believe that the inference you've made about the likely negligible effects of eliminating police bias is not really supported by the evidence presented. That, and the personal importance of the issue to me, compelled me to write the response.

Update (10/19). This post by Jacob Dink is worth reading. Jacob shows that the likelihood of being shot by police, conditional on being unarmed, is twice as high for blacks as for whites. The likelihood is also higher conditional on being armed, but the difference is smaller:

This, together with the fact that rates of arrest and killing are roughly equal across groups, implies that blacks are less likely to be armed than whites, conditional on an encounter. In the absence of bias, therefore, the rate of killing per encounter should be lower for blacks, not equal across groups. So we can't conclude that "removing police racial bias will have little effect on the killing rate." That was the point I was trying to make in this post. 
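
The three facts in this inference can be checked with a purely illustrative calculation. The rates below are made up, chosen only to match the qualitative pattern described above (unarmed shooting risk twice as high for blacks, armed risk somewhat higher, overall risk per encounter equal across groups):

```python
# Hypothetical per-encounter shooting rates, illustrative only
shot_unarmed = {"white": 0.001, "black": 0.002}   # 2x ratio, as in Dink's post
shot_armed = {"white": 0.010, "black": 0.012}     # higher for blacks, smaller ratio
overall = 0.005                                    # parity in killings per encounter

# Solve overall = a * shot_armed + (1 - a) * shot_unarmed for a,
# the probability of being armed conditional on an encounter
armed_share = {
    g: (overall - shot_unarmed[g]) / (shot_armed[g] - shot_unarmed[g])
    for g in ("white", "black")
}
# armed_share["black"] < armed_share["white"]: parity per encounter
# coexists with a lower likelihood of being armed

# With common, race-independent shooting rates (no bias), the lower
# armed share implies a lower killing rate per encounter for blacks
unbiased = {g: armed_share[g] * 0.010 + (1 - armed_share[g]) * 0.001
            for g in ("white", "black")}
```

With these numbers the armed share is 0.30 for blacks and about 0.44 for whites, and the unbiased killing rate per encounter is correspondingly lower for blacks, which is the sense in which observed parity is suggestive of bias.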


Update (10/21). Andrew Gelman follows up. The link above to Jacob Dink's post seems to be broken and I can't find a cached version. But there's a post by Howard Frant from earlier this year that makes a similar point.

Tuesday, September 29, 2015

The Price Impact of Margin-Linked Shorts

The real money peer-to-peer prediction market PredictIt just made a major announcement: they plan to margin-link short positions. This will lead to an across-the-board decline in the prices of many contracts, especially in the two nominee markets. Given that the prices in these markets are already being referenced by the campaigns, this change could well have an impact on the race.

What margin-linking short positions does is to make it substantially cheaper to bet simultaneously against multiple candidates. Instead of a trader's worst-case loss being computed separately for each position, it is computed based on the recognition that only one candidate can eventually win. So a bet against both Bush and Rubio ought to require less cash than a bet against just one of the two, since we know that a loss on one bet implies a win on the other.

In an earlier post I argued that a failure to margin-link short positions was a design flaw that results in artificially inflated prices for all contracts in a given market, making the interpretation of these prices as probabilities untenable. The problem can be seen by looking at some of the current prices in the GOP nominee market:

The "Buy No" column tells us the price per contract of betting against a candidate for the nomination, with each contract paying out a dollar if the named individual fails to secure the nomination. One could buy five of these contracts (Rubio, Bush, Trump, Fiorina, and Carson) for a total of $3.91, and even if one of these were to win, the payoff from the bet would be $4. If, on the other hand, Cruz or Kasich were to be nominated, the bet would pay $5. There is no risk of loss involved.
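
A quick check of this arithmetic. The individual "no" prices below are hypothetical, chosen only to sum to the quoted $3.91; prices are in cents to keep the arithmetic exact:

```python
# Hypothetical "Buy No" prices in cents, summing to the quoted $3.91
no_prices = {"Rubio": 81, "Bush": 80, "Trump": 78, "Fiorina": 77, "Carson": 75}

cost = sum(no_prices.values())  # 391 cents = $3.91

# At most one candidate can be nominated, so at least four of the five
# "no" contracts pay out $1 each; all five pay if Cruz or Kasich wins
worst_case_payoff = (len(no_prices) - 1) * 100  # 400 cents = $4
best_case_payoff = len(no_prices) * 100         # 500 cents = $5

assert worst_case_payoff > cost  # riskless profit in every outcome
```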

Margin-linking shorts recognizes this fact, and would make this basket of five bets collectively cost nothing at all. This would be about as pure an arbitrage opportunity as one is likely to find in real money markets. Aggressive bets would be placed on all contracts simultaneously, with consequent price declines.

A useful effect of this change in design is that manipulating the market becomes much harder. Buying contracts to push up a price would be met by a wall of resistance as long as the sum of all contract prices yields an opportunity for arbitrage. To sustain manipulation would require a trader not only to put a floor on the favored contract, but a ceiling on all others. This has been done before, but would be considerably more costly than under the current market design.

I'd be interested to see which prices are affected most as the transition occurs, and how much prices move in anticipation of the change. But no matter how the aggregate decline is distributed across contracts, this example illustrates one important fact about financial markets in general: prices depend not just on beliefs about the likelihood of future events, but also on detailed features of market design. Too uncritical an acceptance of the efficient markets hypothesis can lead us to overlook this somewhat obvious but quite important point.

Wednesday, April 22, 2015

Spoofing in an Algorithmic Ecosystem

A London trader recently charged with price manipulation appears to have been using a strategy designed to trigger high-frequency trading algorithms. Whether he used an algorithm himself is beside the point: he made money because the market is dominated by computer programs responding rapidly to incoming market data, and he understood the basic logic of their structure.

Specifically, Navinder Singh Sarao is accused of having posted large sell orders that created the impression of substantial fundamental supply in the S&P E-mini futures contract:
The authorities said he used a variety of trading techniques designed to push prices sharply in one direction and then profit from other investors following the pattern or exiting the market.

The DoJ said by allegedly placing multiple, simultaneous, large-volume sell orders at different price points — a technique known as “layering”— Mr Sarao created the appearance of substantial supply in the market.
Layering is a type of spoofing, a strategy of entering bids or offers with the intent to cancel them before completion.
Who are these "other investors" that followed the pattern or exited the market? Surely not the fundamental buyers and sellers placing orders based on an analysis of information about the companies of which the index is composed. Such investors would not generally be sensitive to the kind of order book details that Sarao was trying to manipulate (though they may buy or sell using algorithms sensitive to trading volume in order to limit market impact). Furthermore, as Andrei Kirilenko and his co-authors found in a transaction level analysis, fundamental buyers and sellers account for a very small portion of daily volume in this contract.

As far as I can tell, the strategies that Sarao was trying to trigger were high-frequency trading programs that combine passive market making with aggressive order anticipation based on privileged access and rapid responses to incoming market data. Such strategies correspond to just one percent of accounts on this exchange, but are responsible for almost half of all trading volume and appear on one or both sides of almost three-quarters of traded contracts.

The most sophisticated algorithms would have detected Sarao's spoofing and may even have tried to profit from it, but less nimble ones would have fallen prey. In this manner he was able to syphon off a modest portion of HFT profits, amounting to about forty million dollars over four years.

What is strange about this case is the fact that spoofing of this kind is, to quote one market observer, as common as oxygen. It is frequently used and defended against within the high frequency trading community. So why was Sarao singled out for prosecution? I suspect that it was because his was a relatively small account, using a simple and fairly transparent strategy. Larger firms that combine multiple strategies with continually evolving algorithms will not display so clear a signature. 

It's important to distinguish Sarao's strategy from the ecology within which it was able to thrive. A key feature of this ecology is the widespread use of information extracting strategies, the proliferation of which makes direct investments in the acquisition and analysis of fundamental information less profitable, and makes extreme events such as the flash crash practically inevitable.

Monday, April 06, 2015

Intermediation in a Fragmented Market

There’s a recent paper by Merritt Fox, Lawrence Glosten and Gabriel Rauterberg that anyone interested in the microstructure of contemporary asset markets would do well to read. It's one of the few papers to take a comprehensive and theoretically informed look at the welfare implications of high frequency trading, including effects on the incentives to invest in the acquisition and analysis of fundamental information, and ultimately on the allocation of capital and the distribution of risk.

Back in 1985, Glosten co-authored what has become one of the most influential papers in the theory of market microstructure. That paper considered the question of how a market maker should set bid and ask prices in a continuous double auction in the presence of potentially better informed traders. The problem facing the market maker is one of adverse selection: a better informed counterparty will trade against a quote only if doing so is profitable, which necessarily means that all such transactions impose a loss on the market maker. To compensate there must be a steady flow of orders from uninformed parties, such as investors in index funds who are accumulating or liquidating assets to manage the timing of their consumption. The competitive bid-ask spread depends, among other things, on the size of this uninformed order flow as well as the precision of the signals received by informed traders.
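
This adverse-selection logic can be sketched in the simplest binary-value version of the model. The code below is a minimal illustration, not the paper's own notation, and the parameter values are made up:

```python
# Competitive quotes in a bare-bones Glosten-Milgrom setting. The asset is
# worth 1 with probability p and 0 otherwise. A fraction mu of arriving
# traders are informed (they know the value); the rest buy or sell with
# equal probability. A competitive market maker sets each quote equal to
# the expected value conditional on the side of the incoming trade.

def glosten_milgrom_quotes(p: float, mu: float) -> tuple[float, float]:
    # Informed traders buy only when the value is 1
    p_buy = mu * p + (1 - mu) * 0.5
    p_high_and_buy = p * (mu + (1 - mu) * 0.5)
    ask = p_high_and_buy / p_buy
    # Symmetrically, informed traders sell only when the value is 0
    p_sell = mu * (1 - p) + (1 - mu) * 0.5
    p_high_and_sell = p * (1 - mu) * 0.5
    bid = p_high_and_sell / p_sell
    return bid, ask

bid, ask = glosten_milgrom_quotes(p=0.5, mu=0.4)
# With mu = 0 the spread vanishes; it widens as the informed share rises,
# which is the sense in which the spread compensates for adverse selection.
```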

The Glosten-Milgrom model, together with a closely related contribution by Albert Kyle, provides the theoretical framework within which the new paper develops its arguments. This is a strength because the role of adverse selection is made crystal clear. In particular, any practice that defends a market maker against adverse selection (such as electronic front running, discussed further below) will tend to lower spreads under competitive conditions. This will benefit uninformed traders at the margin, but will hurt informed traders, reduce incentives to acquire and analyze fundamental information, and could result in lower share price accuracy.

Such trade-offs are inescapable, and the Glosten-Milgrom and Kyle models help to keep them in sharp focus. But this theoretical lens is also a limitation because the market makers in these models are passive liquidity providers who do not build directional exposure based on information gleaned from their trading activity. This may be a reasonable description of the specialists of old, but the new market makers combine passive liquidity provision with aggressive order anticipation, and respond to market data not simply by cancelling orders and closing out positions but by speculating on short term price movements. They would do so even in the absence of market fragmentation, and this has implications for price volatility and the likelihood of extreme events which I have discussed in earlier posts.

The focus of the paper, however, is not on volatility but on market fragmentation and differential access to information. The authors argue that three controversial practices---electronic front running, slow market arbitrage, and midpoint order exploitation---can all be traced to these two features of contemporary markets, and can all be made infeasible by a simple change in policy. It's worth considering these arguments in some detail.

Electronic front running is the practice of using information acquired as a result of a trade at one venue to place or cancel orders at other venues while orders placed at earlier points in time are still in transit. The authors illustrate the practice with the following example: 
For simplicity of exposition, just one HFT, Lightning, and two exchanges, BATS Y and the NYSE, are involved. Lightning has co-location facilities at the respective locations of the BATS Y and NYSE matching engines. These co-location facilities are connected with each other by a high-speed fiber optic cable.
An actively managed institutional investor, Smartmoney, decides that Amgen’s future cash flows are going to be greater than its current price suggests. The NBO is $48.00, with 10,000 shares being offered at this price on BATS Y and 35,000 shares at this price on NYSE. Smartmoney decides to buy a substantial block of Amgen stock and sends a 10,000 share market buy order to BATS Y and a 35,000 share market buy order to NYSE. The 35,000 shares offered at $48.00 on NYSE are all from sell limit orders posted by Lightning.
The order sent to BATS Y arrives at its destination first and executes. Lightning’s colocation facility there learns of the transaction very quickly. An algorithm infers from this information that an informed trader might be looking to buy a large number of Amgen shares and thus may have sent buy orders to other exchanges as well. Because of Lightning’s ultra-high speed connection, it has the ability to send a message from its BATS Y co-location facility to its co-location facility at NYSE, which in turn has the ability to cancel Lightning’s 35,000 share $48.00 limit sell order posted on NYSE. All this can happen so fast that the cancellation would occur before the arrival there of Smartmoney’s market buy order. If Lightning does cancel in this fashion, it has engaged in “electronic front running.” 
Note that if Smartmoney had simply sent an order to buy 45,000 shares to BATS Y, of which an unfilled portion of 35,000 was routed to NYSE, the same pattern of trades and cancellations would occur. But in this alternative version of the example, orders would not be processed in the sequence in which they make first contact with the market. In particular, the cancellation order would be processed before the original buy order had been processed in full. This seems to violate the spirit if not the letter of Regulation NMS.

Furthermore, while the authors focus on order cancellation in response to the initial information, there is nothing to prevent Lightning from buying up shares on NYSE, building directional exposure, then posting offers at a slightly higher price. In fact, it cannot be optimal from the perspective of a firm with such a speed advantage to simply cancel orders in response to new information: there must arise situations in which the information is strong enough to warrant a speculative trade. In effect, the firm would mimic the behavior of an informed trader by extracting the information from market data, at a fraction of the cost of acquiring the information directly. 

Electronic front running prevents informed traders from transacting against all resting orders that are available at the time they place an order. This defends high frequency traders against adverse selection, allowing them to post smaller spreads, which benefits uninformed traders. But it also lowers the returns to investing in the acquisition and analysis of information, potentially lowering share price accuracy. Given this, the authors consider the welfare effects of electronic front running to be ambiguous.

The other two practices, however, result in unambiguously negative welfare effects. First consider slow market arbitrage, defined and illustrated by the authors as follows:
Slow market arbitrage can occur when an HFT has posted a quote representing the NBO or NBB on one exchange, and subsequently someone else posts an even better quote on a second exchange, which the HFT learns of before it is reported by the national system. If, in the short time before the national report updates, a marketable order arrives at the first exchange, the order will transact against the HFT’s now stale quote. The HFT, using its speed, can then make a riskless profit by turning around and transacting against the better quote on the second exchange…

To understand the practice in more detail, let us return to our HFT Lightning. Suppose that Lightning has a limit sell order for 1000 shares of IBM at $161.15 posted on NYSE. This quote represents the NBO at the moment. Mr. Lowprice then posts a new 1000 share sell limit order for IBM on EDGE for $161.13.

The national reporting system is a bit slow, and so a short period of time elapses before it reports Lowprice’s new, better offer. Lightning’s co-location facility at EDGE very quickly learns of the new $161.13 offer, however, and an algorithm sends an ultra-fast message to Lightning’s co-location facility at NYSE informing it of the new offer. During the reporting gap, though, Lightning keeps posted its $161.15 offer. Next, Ms. Stumble sends a marketable buy order to NYSE for 1000 IBM shares. Lightning’s $161.15 offer remains the official NBO, and so Stumble’s order transacts against it. Lightning’s co-location facility at NYSE then sends an ultra-fast message to the one at EDGE instructing it to submit a 1000 share marketable buy order there. This buy order transacts against Lowprice’s $161.13 offer. Thus, within the short period before the new $161.13 offer is publicly reported, Lightning has been able to sell 1000 IBM shares at $161.15 and purchase them at $161.13, for what appears to be a $20 profit. 
This practice hurts both informed and uninformed traders, and is a clear example of what I have elsewhere called superfluous financial intermediation. According to the authors this practice would have negative welfare effects even if it did not require the investment of real resources.

In discussing wealth transfer, the authors argue that "Ms. Stumble... would have suffered the same fate if Lightning had not engaged in slow market arbitrage because that course of action would have also left the $161.15 offer posted on NYSE and so Stumble’s buy order would still have transacted against it." While this is true under existing order execution rules, note that it would not be true if orders were processed in the sequence in which they make first contact with the market. 

Finally, consider mid-point order exploitation:
A trader will often submit to a dark pool a “mid-point” limit buy or sell order, the terms of which are that it will execute against the next marketable order with the opposite interest to arrive at the pool and will do so at a price equal to the mid-point between the best publicly reported bid and offer at the time of execution. Mid-point orders appear to have the advantage of allowing a buyer to buy at well below the best offer and sell well above the best bid. It has been noted for a number of years, however, that traders who post such orders are vulnerable to the activities of HFTs… Mid-point order exploitation again involves an HFT detecting an improvement in the best available bid or offer on one of the exchanges before the new quote is publicly reported. The HFT puts in an order to transact against the new improved quote, and then sends an order reversing the transaction to a dark pool that contains mid-point limit orders with the opposite interest that transact at a price equal to the mid-point between the now stale best publicly reported bid and offer…

Let us bring back again our HFT, Lightning. Suppose that the NBO and NBB for IBM are $161.15 and $161.11, respectively, and each are for 1000 shares and are posted on NYSE by HFTs other than Lightning. Then the $161.15 offer is cancelled and a new 1000 share offer is submitted at $161.12. Lightning, through its co-location facilities at NYSE, learns of these changes in advance of their being publicly reported. During the reporting gap, the official NBO remains $161.15.

Lightning knows that mid-point orders for IBM are often posted on Opaque, a well known dark pool, and Lightning programs its algorithms accordingly. Because Opaque does not disclose what is in its limit order book, Lightning cannot know, however, whether at this moment any such orders are posted on Opaque, and, if there are, whether they are buy orders or sell orders. Still there is the potential for making money.

Using an ultra-fast connection between the co-location facility at NYSE and Opaque, a sell limit order for 1000 shares at $161.13 is sent to Opaque with the condition attached that it cancel if it does not transact immediately (a so-called “IOC” order). This way, if there was one or more mid-point buy limit orders posted at Opaque for IBM, they will execute against Lightning’s order at $161.13, half way between the now stale, but still official, NBB of $161.11 and NBO of $161.15. If there are no such mid-point buy orders posted at Opaque, nothing is lost.

Assume that there are one or more such mid-point buy orders aggregating to at least 1000 shares and so Lightning’s sell order of 1000 shares transacts at $161.13. Lightning’s co-location facility at NYSE is informed of this fact through Lightning’s ultra-fast connection with Opaque. A marketable buy order for 1000 shares is sent almost instantaneously to NYSE, which transacts against the new $161.12 offer. Thus, within the short period before the new $161.12 offer on NYSE is publicly reported, Lightning has been able to execute against this offer, purchase 1000 IBM shares at $161.12, and sell them at $161.13, for what appears to be a $10.00 profit. 
As in the case of slow market arbitrage, this hurts informed and uninformed traders alike. 
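
The arithmetic in both quoted examples can be checked directly (prices in cents to keep it exact):

```python
# Riskless round-trips from the two quoted examples, 1000 shares each,
# prices in cents to avoid floating-point noise
shares = 1000

# Slow market arbitrage: sell against the stale $161.15 NBO, then buy
# Lowprice's $161.13 offer on EDGE
slow_arb_profit = shares * (16115 - 16113)           # 2000 cents = $20

# Mid-point order exploitation: sell at the stale midpoint of the old
# $161.11 bid and $161.15 offer, then buy the new $161.12 offer on NYSE
stale_midpoint = (16111 + 16115) // 2                # 16113 cents = $161.13
midpoint_profit = shares * (stale_midpoint - 16112)  # 1000 cents = $10
```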

The three activities discussed above all stem from the fact that trading in the same securities occurs across multiple exchanges, and market data is available to some participants ahead of others. The authors argue that a simple regulatory change could make all three practices infeasible:
We think there is an approach to ending HFT information speed advantages that is simpler both in terms of implementation and in terms of achieving the needed legal changes. None of these three practices would be possible if private data feeds did not make market quote and transaction data effectively available to some market participants before others. Thus, one potential regulatory response to the problem posed by HFT activity is to require that private dissemination of quote and trade information be delayed until the exclusive processor under the Reg. NMS scheme, referred to as the “SIP,” has publicly disseminated information from all exchanges.
Rule 603(a)(2) of Reg. NMS prohibits exchanges from “unreasonably discriminatory” distribution of market data… Sending the signal simultaneously to an HFT and to the SIP arguably is “unreasonably discriminatory” distribution of core data to the end users since it is predictable that some will consistently receive it faster than others… Interestingly, this focus on the time at which information reaches end users rather than the time of a public announcement is the approach the courts and the SEC have traditionally taken with respect to when, for purposes of the regulation of insider trading, information is no longer non-public. Thus the SEC’s ability to alter its interpretation of Rule 603(a)(2) may be the path of least legislative or regulatory resistance to prohibiting electronic front-running. 
There’s an even simpler solution, however, and that is to process each order in full in the precise sequence in which it makes first contact with the market. That is, if two orders reach an exchange in quick succession, they should be processed not in the order in which they reach that exchange but rather in the order in which they first reached any exchange. Failing this, I don't see how we can be said to have a "national market system" at all.
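
A toy sketch of what such first-contact sequencing might look like. All names and structures here are illustrative, not an actual exchange protocol: each message is stamped when it first touches any venue, and a venue drains its queue in that global order rather than in local arrival order.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    # Time the order first made contact with ANY venue; this is the only
    # field used for ordering
    first_contact_time: float
    action: str = field(compare=False)   # e.g. "buy 35000", "cancel"
    venue: str = field(compare=False)

def process(messages):
    # Process by first-contact timestamp, not by local arrival order
    queue = list(messages)
    heapq.heapify(queue)
    return [(m.action, m.venue) for m in
            (heapq.heappop(queue) for _ in range(len(queue)))]

# In the Smartmoney example, the buy order makes first contact (at BATS Y)
# before Lightning's cancellation is even sent, so under this rule the
# cancel cannot jump ahead of the buy at NYSE
sequence = process([
    Message(first_contact_time=2.0, action="cancel", venue="NYSE"),
    Message(first_contact_time=1.0, action="buy 35000", venue="NYSE"),
])
```

The design choice being illustrated is simply that priority attaches to the order, not to the message's arrival at each venue, which is what makes the cancellation race in the front-running example unwinnable.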

Friday, April 03, 2015

Prediction Market Design

The real money, peer-to-peer prediction market PredictIt is up and running in the United States. Modeled after the pioneering Iowa Electronic Markets, it offers a platform for the trading of futures contracts with binary payoffs contingent on the realization of specified events. 

There are many similarities to the Iowa markets: the exchange has been launched by an educational institution (New Zealand's Victoria University), is offered as an experimental research facility rather than an investment market, operates legally in the US under a no-action letter from the CFTC, and limits both the number of traders per contract and account size. While there are no fees for depositing funds or entering positions, the exchange takes a 10% cut when positions are closed out at a profit and charges a 5% processing fee for withdrawals. (IEM charges no fees whatsoever.) 

While the exchange is still in beta and ironing out some software glitches, trading is already quite heavy, with bid-ask spreads down to a penny or two in some contracts referencing the 2016 elections. Trading occurs via a continuous double auction, but the presentation of the order book is non-standard. Here, for instance, is the current book for a contract that pays off if the Democratic nominee wins the 2016 presidential election:


To translate this to the more familiar order book representation, read the left column as the ask side and subtract the prices in the right column from 100 to get the bid side.

There's one quite serious design flaw, which ought to be easy to fix. Unlike on the IEM (or Intrade, for that matter), short positions in multiple contracts referencing the same event are not margin-linked. To see the consequences of this, consider the prices of contracts in the heavily populated Republican nomination market:

These are just 10 of the 17 available contracts, and the list is likely to expand further. According to the prices at last trade, there's a 102% chance that the nominee will be Bush, Walker or Rubio, and a 208% chance that it will be one of these ten. If one buys the "no" side of all ten contracts at the quoted prices at a cost of $8.10, a payoff of at least $9 is guaranteed, since the party will have at most one nominee. That's a risk-free return of at least 11% over a year and a quarter. 
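The return on this trade can be checked directly. The figures below are the ones quoted in the post: ten "no" contracts costing $8.10 in total, of which at least nine must pay $1 each, since the party has at most one nominee.

```python
# Verifying the risk-free return on the ten-contract "no" position.
no_cost = 8.10            # total cost of the ten "no" contracts (from the post)
guaranteed_payoff = 9.00  # at least 9 of 10 "no" contracts pay $1 each

profit = guaranteed_payoff - no_cost
print(round(profit / no_cost, 3))  # 0.111 -- roughly an 11% risk-free return
```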

This mispricing would vanish in an instant if the cost of buying a set of contracts were limited to the worst case loss, as indeed was the case on Intrade (IEM allows the purchase and sale of bundles at par, which amounts to the same thing). Then buying the "no" side of the top two contracts would cost $0.26 instead of $1.26, and shorting the top three would be free.
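Worst-case-loss margining can be sketched in a few lines. Since at most one contract on the event can resolve "yes", a set of n "no" positions is guaranteed to pay at least (n-1) × $1, and that guaranteed payoff can be netted against the purchase cost. The individual prices below are hypothetical, chosen only to be consistent with the $1.26 and $0.26 figures in the post.

```python
# Sketch of margin-linked "no" positions: cash required is capped at the
# worst-case loss. Prices in cents are hypothetical.
def margin_linked_cost(no_prices):
    unlinked = sum(no_prices)                 # cost with no margin linkage
    guaranteed = 100 * (len(no_prices) - 1)   # at least n-1 contracts pay 100
    return max(unlinked - guaranteed, 0)      # worst-case loss, floored at zero

print(margin_linked_cost([63, 63]))      # 26 cents instead of 126
print(margin_linked_cost([63, 63, 74]))  # 0 -- shorting all three is free
```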

If the exchange were to manage a switch to margin-linked shorts, all those currently holding "no" positions would make a windfall gain as prices snap into line with a no-arbitrage condition. Furthermore, algorithmic traders would jump into the market to exploit new arbitrage opportunities as they appear. Such algos have been active on IEM for a while now, and were responsible for a good portion of transactions on Intrade.

Despite this one design flaw, I expect that these markets will be tracked closely as the election approaches, and that liquidity will increase quite dramatically. This despite the fact that traders are entering into negative-sum bets with each other, which ought to be impossible under the common prior assumption. The arbitrage conditions will come to be more closely approximated as the time to contract expiration diminishes, especially in markets with few active contracts (such as that for the winning party). But unless the flaw is fixed, the translation of prices into probabilities will require a good deal of care.

Sunday, March 29, 2015

A Separating Equilibrium in Indiana

In the wake of Indiana's passage of the Religious Freedom Restoration Act, the following stickers have started appearing on storefronts across the state:

These signs allow business owners to signal their disapproval of the law, and if they spread sufficiently far and wide, will force those not displaying them to implicitly signal approval of the law. It's worth reflecting on the consequences of this for customer choices, the profitability of firms, and the beliefs of individuals about the preferences of those with whom they occasionally interact.

At any given location, the meaning of the symbol will come to depend on the number and characteristics of the nearby firms displaying it. If all businesses were to paste the sticker alongside their Visa and Mastercard logos, it would be devoid of informational content and would not influence customer choices; this is what game theorists quaintly call a babbling equilibrium.

But it's highly unlikely that such a situation would arise. Some owners will display the sign as a matter of principle, regardless of its effect on their bottom line, while others will adamantly refuse to do so even if profitability suffers as a result. 

Between these extremes lies a large segment of firms for whom the choice involves a trade-off between profit and principle. They may disapprove of the law and yet abstain from taking a public position, or they may approve and cynically pretend to disapprove. What they choose will depend on the distribution of characteristics in their customer base, as well as the choices made by other firms. 

In more liberal areas, such as college towns, those who display the stickers will likely profit from doing so, and owners concerned primarily with their profitability will be induced to join them. The meaning of the symbol will accordingly be diluted: some of those displaying it will be indifferent to the law or even mildly supportive. By the same token, the meaning of not displaying the symbol will be sharpened. Customers will sort themselves across businesses accordingly, with those opposed to the law actively avoiding businesses without stickers, thus reinforcing the effects on profitability and firm behavior.

In more conservative areas, those who display the stickers will likely experience a net loss of customers, and the meaning of the symbol will accordingly be quite different. Only those strongly opposed to the law will publicly exhibit their disapproval, and among those who abstain from displaying the stickers will be some who are privately opposed to the law. In this case customers opposed to the law will be less vigorous in seeking out businesses with stickers, again reinforcing the effects on profitability and firm behavior.  

Just as customers will come to know more about the private preferences of business owners, the owners will come to know more about the customers they attract and retain. Furthermore, customers in a given store will come to know more about each other. Bars and bakeries will become a bit more like niche bookstores, and casual interactions will become a bit more segregated along ideological lines. None of these are intended consequences of the law, but they are some of its predictable effects, and it's worth giving some thought to whether or not they are desirable.

I've heard it said that businesses in Indiana had the authority to deny service to some customers even prior to the passage of the new law, and that it therefore doesn't involve any substantive change in rights. Even so, it's a symbolic gesture that pins upon a group of people a badge of inferiority. Responding to this with a different set of symbols thus seems entirely appropriate.

Friday, December 19, 2014

Coordination, Efficiency, and the Coase Theorem

A recent post by Matt Levine starts out with the following observation:
A good general principle in thinking about derivatives is that real effects tend to ripple out from economic interests. This is not always true, and not always intuitive: If you and I bet on a football game, that probably won't affect the outcome of the game. But most of the time, in financial markets, it is a mistake to think of derivatives as purely zero-sum, two-party bets with no implications for the underlying thing. Those bets don't want to stay in their boxes; they want to leak out and try to make themselves come true.
Now one could object that you and I can't affect the outcome of a sporting event because neither of us is Pete Rose or Hansie Cronje, and that we can't affect credit events with our bets either. But this would be pedantic, and miss the larger point. Levine is arguing that the existence of credit derivatives creates incentives for negotiated actions that result in efficient outcomes; that the "Coase Theorem works pretty well in finance." 

To make his point, Levine draws on two striking examples in which parties making bets on default using credit derivatives spent substantial sums trying to make their bets pay off, using the anticipated revenues to subsidize their efforts. In one case a protection buyer provided financing on attractive terms for the reference entity (Codere), under the condition that it delay an interest payment, thus triggering a credit event and resulting in a payout on the bet. In the other case, a protection seller offered financing to the reference entity (Radio Shack) in order to help it meet contractual debt obligations until the swaps expire. The significance of these examples, for Levine, is that they are on opposite sides of the market: "the two sides can manipulate against each other, and in expectation the manipulations and counter-manipulations will cancel each other out and you'll get the economically correct result." 

Well, yes, if we lived in a world without transactions costs. Such a world is sometimes called Coasean, but it would be more accurate to describe it as anti-Coasean. The world of zero transactions costs that Coase contemplated in his classic paper was a thought experiment designed to illustrate certain weaknesses in the neoclassical method, especially as it pertains to the analysis of externalities. But the world in which these deals were made is one in which transactions costs are significant and pervasive. Given this, what do the examples really teach us? 

Transactions costs arise from a broad range of activities, including the negotiation and enforcement of contracts, and the coordination of efforts by multiple interested parties. In two party settings (such as the case of Sturges v. Bridgman explored by Coase) these costs can be manageable, since little coordination is required. But once multiple parties are involved transactions costs can quickly become prohibitive, in part because no stable agreement may exist. And as Levine himself usefully informs us, "there are a lot of credit default swaps outstanding on Radio Shack's debt, now about $26 billion gross and $550 million net notional." 

The two sides of this market are populated by multiple firms, each with different stakes in the outcome. For a single party on one side of the market to negotiate a deal with the reference entity requires that its position be large, especially in relation to those on the opposite side of the trade. The resulting outcome will reflect market structure and the distribution of position sizes rather than the overall gains from trade. The examples therefore point not to the relevance of the Coase Theorem, which Coase himself considered largely irrelevant as a descriptive claim, but rather to the fact that coordination trumps efficiency in finance.