Sunday, April 10, 2016

Fee-Structure Distortions in Prediction Markets

Since the launch of the pioneering Iowa Electronic Markets almost thirty years ago, prediction markets have grown to become a familiar fixture in the forecasting landscape. Among the most recent entrants is PredictIt, which has been operating for about a year under a no-action letter from the CFTC.

Both IEM and PredictIt offer contracts structured as binary options: if the referenced event occurs, the buyer of the contract gets a fixed payment at the expense of the seller, and otherwise gets nothing. The price of the contract (relative to the winning payment) may then be interpreted as a probability: an assessment by the "market" of the likelihood that the event will occur. These probabilities can be calibrated against actual outcomes over multiple events, and compared with survey-based and model-based forecasts. Comparisons of this kind have generally found the forecasting performance of markets to be superior on average to that of forecasts based on more traditional methods.
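As a minimal illustration of what such a calibration exercise involves, the sketch below bins contract prices and compares each bin's average with the realized frequency of the corresponding events. The prices and outcomes here are made up for illustration, not actual market data.

```python
# Minimal calibration check for binary-contract prices (illustrative only;
# the prices and outcomes below are hypothetical, not real market data).
from collections import defaultdict

# (price at some reference time, 1 if the event occurred, 0 otherwise)
observations = [
    (0.92, 1), (0.85, 1), (0.78, 0), (0.65, 1), (0.55, 0),
    (0.40, 0), (0.35, 1), (0.20, 0), (0.12, 0), (0.05, 0),
]

bins = defaultdict(list)
for price, outcome in observations:
    bins[round(price, 1)].append(outcome)   # group into ten-cent buckets

for bucket in sorted(bins):
    outcomes = bins[bucket]
    freq = sum(outcomes) / len(outcomes)
    print(f"price bucket ~{bucket:.1f}: realized frequency {freq:.2f} "
          f"over {len(outcomes)} events")
```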

But interpreting prices as probabilities requires, at a minimum, that the set of prices referencing mutually exclusive outcomes sum to at most one. This condition is routinely violated on PredictIt. For instance, in the market for the presidential election winner by party, we currently have:


Based on the prices at last trade, there is an absurd 108% likelihood that someone or other will be elected president. Furthermore, the price of betting against all three listed outcomes (by buying the corresponding no contracts) is $1.96, even though the payout from this bundle is sure to be $2.00. Since these contracts are margin-linked (the exchange only requires a trader to post his or her worst-case loss), and this bundle can never lose money, the cost of buying it would be precisely zero in the absence of fees. This would be as pure an opportunity for arbitrage as one is likely to find.

On IEM, or the now defunct Intrade, such a pattern of pricing would never be observed except perhaps for an instant. The discrepancy would be spotted by an algorithm and trades executed until the opportunity had been fully exploited. Profits would be small on any given trade, but would add up quickly: the most active account on Intrade during the last presidential election cycle traded close to four million contracts for a profit of $62,000 with minimal risk and effort. This trader had a median holding period of zero milliseconds. That is, the trader typically sold multiple candidate contracts simultaneously (with the trades having identical timestamps) in a manner that could not possibly have been done manually.

Why don't we see this on PredictIt? The simple answer is the fee structure. Whenever a position is closed at a profit the exchange takes 10% of the gains; losing trades don't incur fees. Taking account of this fee structure, the worst-case outcome for a trader betting against all three outcomes in the example above would be a win by someone other than a major party nominee. In this case the trader would lose $0.95 on the losing contract and gain $0.99 on the other two, incurring fees of around ten cents on the latter. The result would be a net loss rather than a gain, and hence no opportunity for arbitrage. Prices could remain at these levels indefinitely.
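To make the arithmetic explicit, here is a minimal sketch of the payoff to buying No on all three outcomes, with and without the fee. The $0.95 price of the No contract on a third-party win is from the example above; the split of the remaining $1.01 between the two major-party No contracts is a hypothetical assumption.

```python
# Worst-case analysis of buying No on all three mutually exclusive outcomes.
# The 0.95 price is from the example above; the 0.42/0.59 split of the
# remaining 1.01 is a hypothetical assumption for illustration.
FEE_RATE = 0.10  # PredictIt keeps 10% of gains on each winning position

no_prices = {"Democrat": 0.42, "Republican": 0.59, "Other": 0.95}

def bundle_payoff(winner, fee_rate):
    """Net payoff of holding a No contract on every outcome when `winner` occurs."""
    net = 0.0
    for outcome, price in no_prices.items():
        if outcome == winner:
            net -= price                      # No on the winning outcome pays nothing
        else:
            gain = 1.0 - price                # a winning No contract pays $1
            net += gain - fee_rate * gain     # fee charged only on gains
    return net

print("Bundle cost:", sum(no_prices.values()))          # 1.96
for winner in no_prices:
    print(f"If {winner} wins: net {bundle_payoff(winner, FEE_RATE):+.3f} "
          f"(no-fee net {bundle_payoff(winner, 0.0):+.3f})")
```

With this particular split, every outcome leaves the bundle slightly underwater once the fee is applied, which is why prices like these can persist.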

Still, algorithmic arbitrage can prevent prices from getting too far out of line with meaningful probabilities. The extent of the possible distortion depends on whether the events in question include some that are considered highly unlikely. In a market with only two possibilities (such as that referencing confirmation of Merrick Garland), price distortion will be lowest if both outcomes are considered equally likely. For instance, if the prices of the two contracts were each 53, betting against both would cost 94, and fees would be a shade above 5 no matter what happens. These prices could not be sustained, so the distortion would be at most about 5%.

But in the same market, prices of 99 and 10 for the two outcomes could be sustained, for a distortion of 9%. The cost of betting against both would be 91, but if the less likely outcome occurs the winning contract gains 99 while the other loses 90, and the fee of nearly 10 on the gain more than wipes out the 9 that remains after covering the loss. Hence no opportunity for arbitrage, and no pressure on prices to change.
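The same check for a two-outcome market, applied to the two price pairs discussed above (a minimal sketch; prices in cents):

```python
# Check whether buying No on both contracts of a two-outcome market is a
# riskless arbitrage under a 10% fee on gains. Prices are in cents.
FEE_RATE = 0.10

def worst_case_net(yes_prices, fee_rate=FEE_RATE):
    """Worst-case net payoff (in cents) of buying No on every outcome."""
    no_prices = [100 - p for p in yes_prices]
    nets = []
    for winner in range(len(yes_prices)):
        net = 0.0
        for i, no_price in enumerate(no_prices):
            if i == winner:
                net -= no_price                # No on the winning outcome expires worthless
            else:
                gain = 100 - no_price          # a winning No contract pays 100
                net += gain - fee_rate * gain  # 10% fee applies to the gain
        nets.append(net)
    return min(nets)

for yes_prices in [(53, 53), (99, 10)]:
    wc = worst_case_net(yes_prices)
    verdict = "arbitrage, cannot persist" if wc > 0 else "no arbitrage, can persist"
    print(f"Yes prices {yes_prices}: worst-case net {wc:+.1f} cents -> {verdict}")
```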

Given that PredictIt is operating as an experimental research facility with the purpose of generating useful data for academic research, this situation is unfortunate. It would be easy for the exchange to apply fees only to net profits in a given market, after taking account of all losses and gains, as suggested here. This does not require any change in the manner in which margin is calculated at contract purchase, only a refund once the market closes. If this is done, prices should snap into line and begin to represent meaningful probabilities. The decline in revenue would be partially offset by increased participation. And the transition itself would generate interesting data for researchers, consistent with the stated mission of the enterprise.
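For comparison, here is a sketch of the same three-contract bundle under the alternative suggested above, with the 10% fee charged only on net profits in the market as a whole and settled when the market closes (same hypothetical price split as before):

```python
# Fee on net profit per market, applied to the all-No bundle from the earlier
# example. The 0.42/0.59 split remains a hypothetical assumption.
FEE_RATE = 0.10
no_prices = {"Democrat": 0.42, "Republican": 0.59, "Other": 0.95}

def net_profit_fee_payoff(winner):
    gross = sum((1.0 if o != winner else 0.0) - p for o, p in no_prices.items())
    fee = FEE_RATE * max(gross, 0.0)   # fee charged only if the market position nets a profit
    return gross - fee

for winner in no_prices:
    print(f"If {winner} wins: net {net_profit_fee_payoff(winner):+.3f}")
# Every outcome nets +0.036: a riskless gain, so prices like these could not persist.
```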

Saturday, March 19, 2016

Does Market Microstructure Matter?

The Securities and Exchange Commission has decided to delay for a second time ruling on the application by IEX to register as a national securities exchange. This time they did so without seeking or receiving permission from the applicant, on the grounds that a decision requires clarification of their own order protection rule. Accordingly, they have posted a notice of proposed interpretation and invited the general public to submit comment letters.
 
The key passage in the notice is the following:
Specifically, the Commission preliminarily believes that, in the current market, delays of less than a millisecond in quotation response times may be at a de minimis level that would not impair a market participant’s ability to access a quote, consistent with the goals of Rule 611 and because such delays are within the geographic and technological latencies experienced by market participants today... permitting the quotations of trading centers with very small response time delays, such as those proposed by IEX, to be treated as automated quotations, and thereby benefit from trade-through protection under Rule 611, could encourage innovative ways to address market structure issues.

Accordingly, the Commission today is proposing to interpret “immediate” when determining whether a trading center maintains an “automated quotation” for purposes of Rule 611 of Regulation NMS to include response time delays at trading centers that are de minimis, whether intentional or not.
If this proposed interpretation is sustained, it seems to me that the application would have to be approved. But perhaps I'm not being cynical enough. There will certainly be a flurry of comment letters from those whose current business models are threatened by the entry of IEX, and it's possible that the delay is intended to provide cover for a change in interpretation on the basis of which the application will eventually be rejected.

But one thing I find encouraging about the notice is that it seems to find persuasive two excellent comment letters by RT Leuchtkafer (I flagged the second when it was submitted but had missed the first). Among the many points made in these letters is the following: if an intentional delay in allowing traders access to quotes is a violation of Regulation NMS, then the entire system of co-location services, differential access speeds, and proprietary data feeds would need to end. Here's the logic of the argument:
If deliberately slower access is an "intentional device that would delay the action taken with respect to a quotation," as the IEX Critics' reasoning certainly implies, the problem isn't just that all the major exchange groups use "delay coils" to equalize access within their data centers. The problem is that you have to pay to get into their data centers in the first place, and if you don't it sure looks like you are intentionally delayed compared to those who can and do pay.

It gets worse! Even within exchange data centers, exchanges charge fees depending on the speed of your connections. A 10gb connection is certainly delayed with an "intentional device" (you know, routers, switches and the like) relative to a much more expensive 40gb connection, especially when the faster connection is priced out of all proportion to its actual cost, especially when the public SIP feeds have average delays of 500 microseconds to one millisecond and the SEC's own statistics show that billions of quotes are stale before they are ever broadcast by the SIPs.

Where does this logic take us? I naturally started to wonder: if the IEX Critics are right, by their own reasoning the exchanges will have to dismantle their co-location facilities and stop offering tiered high-speed network facilities. They are selling faster access to their markets, and if you don't pay, aren't you slower than you could be, aren't you intentionally delayed?
The critics might want to be careful what they wish for. 

I am on the record in support of the IEX application, and hope that the interpretation proposed in the notice is indeed sustained. The IEX design prevents trading based on information from an order that has been partially filled but not fully processed. It therefore moves us closer to a true national market system, in the sense that orders are processed in full in the sequence in which they make first contact with the market.

But does all this really matter, except to those whose interests are directly at stake? I believe it does, because the rules governing transactions in asset markets affect the relative profitability of different trading strategies, and this in turn has consequences for share price accuracy and volatility, the allocation of capital across competing uses, the costs of financial intermediation, and the returns to ordinary investors.

There is an extremely diverse set of participants in the secondary market for stocks, with significant differences in goals, investment horizons, and trading strategies. It is useful to group these into three broad categories: (a) long-term investors, who save during peak earning years and liquidate assets to finance consumption during retirement; (b) information traders, who seek to profit from deviations between prices and their private estimates of fundamental values; and (c) high-frequency traders, who combine a market-making function with arbitrage and short-term speculation based on rapid responses to incoming market data.

There is clearly a lot of overlap between these categories. For instance, actively managed mutual funds and some hedge funds belong to the second category but often manage money for long-term investors, pension funds, or university endowments.

The traditional market making function involves the placement of passive orders that provide liquidity to the rest of the market. Such passive order placement is subject to adverse selection: if a posted offer to buy or sell is met by an information trader the market maker will suffer losses on average. In order for a market making strategy to be profitable, these losses have to be matched by gains elsewhere. Where do these gains come from?

In standard models of market-making, the bid-ask spread is determined by a balance between losses from transactions with information traders and gains from transactions against those with price-insensitive demands. But this is not the balance that exists in markets today. Instead, high-frequency traders combine passive liquidity provision with aggressive liquidity-taking strategies based on the near-instantaneous receipt and processing of, and reaction to, market data. The posting of bids and offers is motivated less by profiting from the spread than by fishing for information, which can then be used to take and quickly reverse directional positions. The relative weights on passive liquidity provision and aggressive short-term speculation vary considerably across firms, but there is evidence that the most aggressive and profitable among these are able to effectively forecast price movements over very short horizons.
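The textbook balance referred to at the start of this paragraph can be made concrete with a minimal sketch of a Glosten-Milgrom style break-even quote. The two-point value distribution and all parameter values below are illustrative assumptions, not a description of any actual market.

```python
# Break-even bid-ask spread in a stylized Glosten-Milgrom setting (illustrative
# only). V is v_high or v_low with equal prior probability; a fraction alpha of
# arriving orders come from traders who know V, the rest buy or sell at random.
def quotes(v_low, v_high, alpha):
    p_high_given_buy = 0.5 * (1 + alpha)      # Bayes update after seeing a buy order
    ask = p_high_given_buy * v_high + (1 - p_high_given_buy) * v_low
    bid = (1 - p_high_given_buy) * v_high + p_high_given_buy * v_low  # symmetric update after a sell
    return bid, ask

for alpha in (0.1, 0.3, 0.5):
    bid, ask = quotes(v_low=99.0, v_high=101.0, alpha=alpha)
    print(f"alpha={alpha:.1f}: bid={bid:.2f}, ask={ask:.2f}, spread={ask - bid:.2f}")
# The break-even spread widens with the share of informed flow: alpha * (v_high - v_low).
```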

A transition to a truly national market system will affect the competitive balance between information traders and high-frequency traders. It is in the interests of the former to prevent information leakage so that they can build large positions with limited immediate price impact. It is in the interest of the latter to extract this information from market data and trade on it before it has been fully incorporated into prices. Other things equal, the ability to extract information from a partially filled order and trade ahead of it at other exchanges benefits high-frequency traders at the expense of information traders. A truly national market system would mitigate this advantage.

This means, of course, that high-frequency traders would be more vulnerable to adverse selection and would place a lower volume of passive orders to begin with. But the orders would be genuinely available, and not subject to widespread cancellation or poaching if one of them were to trade. Visible bid-ask spreads may widen but there would be no illusion of liquidity.

The shift in competitive balance between these trading strategies would have broader economic implications. The returns to investment in fundamental information would rise relative to the returns to investment in speed, which should result in greater share price accuracy.  Furthermore, there is a real possibility that the aggregate costs of financial intermediation would decline, as expenditures on co-location, rapid data processing and transmission, equipment, energy, and programming talent are scaled back. This would be a desirable outcome from the perspective of long-term investors. After all:
It is the iron law of the markets, the undefiable rules of arithmetic: Gross return in the market, less the costs of financial intermediation, equals the net return actually delivered to market participants.
Finally, extreme volatility events should arise less often. Algorithms making short-term price forecasts may predict well on average but they will sometimes mistake a random fluctuation for a large order imbalance. Such false positives can give rise to a hot potato effect, of the kind that is believed to have been in play during the flash crash.  Of course, such events can occur even in the absence of market fragmentation, and cannot be prevented entirely, but a transition to a true national market system should reduce their amplitude and frequency. 

For these reasons and more, approval of the IEX application would be a modest but meaningful step in the right direction.

Monday, March 07, 2016

Systematic Biases in Prediction Market Forecasts

On Super Tuesday, and then again on March 5, there were systematic biases in prediction market forecasts. Specifically, Donald Trump lost four contests that he was predicted to win (Oklahoma, Alaska, Minnesota and Maine) and won no contests that he was predicted to lose (Texas and Kansas).

Another way to express this is as follows: if someone had bet that Trump would lose all eleven states on Super Tuesday at the prevailing prices, they would have secured a substantial positive return, approximately doubling their money, even though he actually won seven states. A bundle of such bets, one for each state, would have cost about $2 and paid out $4, for a 100% return. On the other hand, betting that Trump would win all eleven states would have lost money, costing about $9 per bundle to get back $7. And there was enough liquidity available to scale up these bets quite substantially.
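Here is a rough sketch of the bundle arithmetic. The per-state prices are hypothetical placeholders chosen only to reproduce the aggregate figures quoted above; the actual Super Tuesday quotes differed, and fees are ignored.

```python
# Bundle arithmetic for betting the same side in every state. Per-state prices
# are hypothetical placeholders that sum to the aggregates quoted above.
# price = last-trade price of "Trump wins this state"; won = actual result.
states = [
    (0.95, True), (0.95, True), (0.95, True), (0.95, True),
    (0.95, True), (0.95, True), (0.90, True),
    (0.75, False), (0.70, False), (0.65, False),   # predicted wins that he lost
    (0.30, False),                                  # Texas: predicted (and actual) loss
]

def bundle(bet_on_win):
    cost = sum(p if bet_on_win else 1 - p for p, _ in states)
    payout = sum(1.0 for _, won in states if won == bet_on_win)
    return cost, payout

for bet_on_win, label in [(False, "Trump loses every state"), (True, "Trump wins every state")]:
    cost, payout = bundle(bet_on_win)
    print(f"'{label}': cost ${cost:.2f}, payout ${payout:.2f}, "
          f"return {100 * (payout - cost) / cost:+.0f}%")
```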

The pattern repeated itself on March 5: betting on Trump losses across the board would have made quite a bit of money, while betting on him to win everything would have been a money-losing proposition. This was true even though he won two of the four states, since betting on him to win was much costlier in the aggregate than betting on him to lose.

Yet another way to say this is that the markets were not terribly well-calibrated. But this ought to be a temporary phenomenon as wealth is transferred across accounts and traders update their beliefs after each outcome realization.

It's election day again tomorrow, which gives us an opportunity to see if such a correction has in fact occurred. Four states are in play on the Republican side, with Trump heavily favored to win Michigan and Mississippi:


If he fails to win either one of these states it would suggest to me that the biases previously evident have not yet been eliminated. 

As far as the other two contests are concerned, Cruz is favored in Idaho, with Rubio (marginally) favored in Hawaii:


How do these predictions compare with more traditional poll and model based forecasts? On the Republican side, all we have is a Michigan forecast from FiveThirtyEight:


This is in substantial agreement with the prediction markets, so the outcome will not help adjudicate between the two approaches. Similarly, on the Democratic side, there is negligible difference in forecasts: the prediction markets and FiveThirtyEight both agree that Clinton is heavily favored to win Michigan and Mississippi. 

One of the most striking facts that David Rothschild and I uncovered in our analysis of Intrade data from the 2012 election was that the overwhelming majority of traders bet in only one direction. They could be partitioned into Obama enthusiasts and Romney enthusiasts, changing their exposure over time in response to news, but never switching entirely from one side to the other. (These categories are based only on beliefs about the eventual outcome, as represented in bets placed, and need not correspond to political preferences.)

If this pattern holds for the contests currently underway, then poorly calibrated forecasts could be a consequence of over-representation in the market of Trump enthusiasts, or a willingness of such individuals to place larger bets. Even so, the bias should be self-correcting over time, as the wealth of those making consistently losing bets is gradually depleted.

---

Update (March 9). The results are in and it's safe to say that there was no hint of bias in favor of Trump this time: he won Michigan and Mississippi handily and Hawaii became the first state that he won while being predicted to lose. The big surprise, of course, was on the Democratic side where Sanders prevailed over Clinton in Michigan. This outcome was given a likelihood of less than one in ten by FiveThirtyEight and prediction markets, and was even mistakenly called for Clinton at 9pm ET last  night:


So we still have little to choose between markets and poll-based predictions, with Kansas being the only state called differently (correctly by the markets and incorrectly by FiveThirtyEight).

The other big news of course was the failure of the Rubio campaign to secure even a single delegate (as of this writing), while Kasich managed to get 17. Even before polls opened on March 5, I wrote:
I suspect that there is a non-negligible probability that Rubio may exit the race before Florida to avoid humiliation there, while Cruz and Kasich survive to the convention.
There's been a lot of chatter about this possibility over the past couple of days, and it seems increasingly likely to me. A Rubio exit before Florida could tip Ohio to Kasich and set up an interesting and unpredictable three-person race going forward. Ohio will also be critical for the continued viability of Sanders. It's a fascinating election cycle.

Saturday, March 05, 2016

Forecasting Elections

This wild and crazy election cycle is generating an enormous amount of data that social scientists will be pondering for years to come. We are learning about the beliefs, preferences, and loyalties of the American electorate, and possibly witnessing a political realignment of historic proportions. Several prominent Republicans have vowed not to support their nominee if it happens to be Trump, while a recent candidate for the Democratic nomination has declared a preference for Trump over his own party's likely nominee. Crossover voting will be rampant come November, but the flows will be in both directions and the outcome remains quite uncertain.

Among the issues that the emerging data will be called upon to address is the accuracy of prediction markets relative to more conventional poll and model based forecasts. Historically such markets have performed well, but they have also been subject to attempted manipulation, and this particular election cycle hasn't really followed historical norms in any case.

On Super Tuesday, the markets predicted that Trump would prevail in ten of the eleven states in play, with the only exception being a Cruz victory in his home state of Texas. This turned out to be quite poorly calibrated, in the sense that all errors were in a single direction: the misses were Oklahoma and Alaska (which went to Cruz) and Minnesota (where Rubio secured his first victory). But the forecasters at FiveThirtyEight also missed Oklahoma and were silent on the other two so no easy comparison is possible. 

Today we have primaries in a few more states, and another opportunity for a comparison. I'll focus on the Republican side, where voting will occur in Kansas, Kentucky, Louisiana and Maine. Markets are currently predicting a Cruz victory in Kansas (though the odds are not overwhelming):


In contrast, FiveThirtyEight gives the edge to Trump, though again it's a close call:


The only other state for which we have predictions from both sources is Louisiana, but here there is negligible disagreement, with Trump heavily favored to win. Trump is also favored by markets to take Kentucky and Maine, for which we have limited polling data and no predictions from FiveThirtyEight. 

So one thing to keep an eye out for is whether Trump wins fewer than three of the four states. If so, the pattern of inflated odds on Super Tuesday will have repeated itself, and one might be witnessing a systematically biased market that has not yet been corrected by new entrants attracted by the profit opportunity. 

But if the market turns out to be well-calibrated, then it's hard to see how Rubio could possibly secure the nomination. Here's the Florida forecast as of now:


The odds of a Trump victory in Michigan are even higher, while Kasich is slightly favored in Ohio. Plenty of things can change over the next couple of weeks, but based on the current snapshot I suspect that there is a non-negligible probability that Rubio may exit the race before Florida to avoid humiliation there, while Cruz and Kasich survive to the convention. This is obviously not the conventional wisdom in the media, where Rubio continues to be perceived as the establishment favorite. But unless things change in a hurry, I just don't see how this narrative can be sustained.

---

Update (March 5). The results are in, with Cruz taking Kansas and Maine and Trump holding on to Kentucky and Louisiana. The only missed call by the prediction markets was therefore Maine. Still, the significant margins of victory for Cruz in Kansas and Maine suggest to me that traders in the aggregate continue to have somewhat inflated expectations regarding Trump's prospects. And I'm even more confident than I was early this morning that Rubio faces a humbling and humiliating loss in his home state of Florida, though he may have no option now but to soldier on.

Tuesday, March 01, 2016

Super Tuesday

It's Super Tuesday, and if the polls and prediction markets aren't completely off base, Donald J. Trump is heading for a significant and perhaps insurmountable delegate lead in the contest for the Republican nomination. According to PredictIt, he is heavily favored to win all states except Texas, in which Cruz continues to have an edge. His likelihood of winning exceeds 90% in Virginia, Georgia, Oklahoma, Massachusetts, Vermont, Alabama, Tennessee and Alaska. He is also favored to win Arkansas and Minnesota, though there is somewhat less certainty about these.

The forecasts at FiveThirtyEight, based on polls and fundamentals, are a bit less skewed but tell a similar story.

The conventional wisdom seems to be that this is good news for the Democrats. For example:

This seems very premature, and is quite inconsistent with prediction market prices. Currently Trump is given an 83% chance of securing the nomination and a 38% chance of winning it all:




His probability of winning conditional on being nominated (roughly 0.38/0.83, or about 46%) is accordingly not far below one-half.

Are the markets completely wrong or are pundits and prognosticators missing something important?

It seems to me that a major political realignment is underway in America. The press has focused on prominent Republicans who could not support Trump under any circumstances, such as Senator Sasse of Nebraska. Some of these will sit out the election or look for a third option; some may consider crossing over. But there will also be crossover votes in the other direction:
Nearly 20,000 Bay State Democrats have fled the party this winter, with thousands doing so to join the Republican ranks, according to the state’s top elections official. Secretary of State William Galvin said more than 16,300 Democrats have shed their party affiliation and become independent voters since Jan. 1, while nearly 3,500 more shifted to the MassGOP ahead of tomorrow’s “Super Tuesday” presidential primary...  The primary reason? Galvin said his “guess” is simple: “The Trump phenomenon,” a reference to GOP frontrunner Donald Trump, who polls show enjoying a massive lead over rivals Marco Rubio, Ted Cruz and others among Massachusetts Republican voters.
This phenomenon is unlikely to change the outcome in Massachusetts come November, but it could be enough to affect New Jersey, Pennsylvania or Ohio. In any case, the traditional lines between red and blue states are going to become increasingly blurred, with highly unpredictable net effects.

Perhaps the prediction markets are wrong on this point, skewed and shifted by Trump enthusiasts. I would certainly prefer it if that were the case. But I suspect that the prediction market crowd is on to something, and there is peril in ignoring it.

---

Update (March 2). The results are in the books, with Trump winning seven of the predicted ten, losing Oklahoma and Alaska to Cruz and Minnesota to Rubio. Cruz won his home state as predicted. Since the markets systematically overestimated Trump's performance, the results should have lowered his odds in both the nominee and the presidential winner markets. And indeed this is what happened:


But here's the thing. Trump's odds of winning the presidency conditional on being nominated did not decline, consistent with the argument I made above. And since he remains the overwhelming favorite for the nomination, it's worth keeping this in mind.

Sunday, January 10, 2016

College Sports and Deadweight Loss

The amount of money generated by college sports is staggering: broadcast rights alone are worth over a billion dollars annually, and this doesn't include ticket sales for live events, revenue from merchandise, or fees from licensing. But the athletes on whose talent and effort the entire enterprise is built get very little in return. As Donald Yee points out in a recent article, these athletes are "making enormous sums of money for everyone but themselves." Even the educational benefits are limited, with "contrived majors" built around athletic schedules and terribly low graduation rates.

Since colleges cannot compete for athletes by bidding up salaries, they compete in absurd and enormously wasteful ways:
Clemson’s new football facility will have a miniature-golf course, a sand volleyball pit and laser tag, as well as a barber shop, a movie theater and bowling lanes. The University of Oregon had so much money to spend on its football facility that it resorted to sourcing exotic building materials from all over the world.
The benefit that athletes (or anyone else for that matter) derive from exotic building materials used for this purpose is negligible in relation to the cost. Only slightly less wasteful are the bowling lanes and other frills at the Clemson facility. The intended beneficiaries would be much better off if they were to receive the amounts spent on these excesses in the form of direct cash payments. This squandering of resources is what economists refer to as deadweight loss.

But are competitive salaries really the best alternative to the current system? I think it's worth thinking creatively about compensation schemes that could provide greater monetary benefits to athletes while also improving academic preparation more broadly. Here's an idea. Suppose that athletes are paid competitive salaries but (with the exception of an allowance to cover living expenses) these are held in escrow until successful graduation. Upon graduation the funds are divided, with one-half going to the athlete as taxable income, and the rest distributed on a pro-rata basis to each primary and secondary school attended by the athlete prior to college. A failure to graduate would result in no payments to schools, and a reduced payment to the athlete.
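Here is a minimal sketch of how the division might be computed. The one-half split and the graduation condition are as described above; weighting schools by years attended, the size of the reduced non-graduation payment, and all of the numbers are assumptions made purely for illustration.

```python
# Illustrative split of escrowed salary under the proposed scheme. The 50/50
# split on graduation is from the post; weighting schools by years attended,
# the reduced non-graduation payment, and all numbers are assumptions.
def distribute(escrow_balance, graduated, schools, non_grad_fraction=0.5):
    if not graduated:
        # No payments to schools; a reduced (assumed) payment to the athlete.
        return {"athlete": non_grad_fraction * escrow_balance}
    athlete_share = 0.5 * escrow_balance
    school_pool = escrow_balance - athlete_share
    total_years = sum(schools.values())
    payments = {"athlete": athlete_share}
    for school, years in schools.items():
        payments[school] = school_pool * years / total_years
    return payments

schools_attended = {"Lincoln Elementary": 6, "Central Middle": 3, "Westside High": 4}
print(distribute(escrow_balance=400_000, graduated=True, schools=schools_attended))
```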

This would provide both resources and incentives to improve academic preparation as well as athletic development at schools. Those talented few who make it to the highest competitive levels in college sports would clearly benefit, since their compensation would be in cash rather than exotic building materials. But the benefits would extend to entire communities, and link academic and athletic performance in a manner both healthy and enduring. It's admittedly a more paternalistic approach than pure cash payments, but surely less paternalistic than the status quo.

Monday, January 04, 2016

The Order Protection Rule

The following is a lightly edited version of my comment letter to the SEC in reference to the application by IEX to register as a national securities exchange. Related issues were discussed in a couple of earlier posts on intermediation in fragmented markets and in this piece on spoofing in an algorithmic ecosystem. 

---

In 1975, Congress directed the SEC, "through enactment of Section 11A of the Exchange Act, to facilitate the establishment of a national market system to link together the multiple individual markets that trade securities." A primary goal was to assure investors "that they are participants in a system which maximizes the opportunities for the most willing seller to meet the most willing buyer."

To implement this directive the SEC instituted Regulation NMS, a centerpiece of which is the Order Protection Rule:
The Order Protection Rule (Rule 611 under Regulation NMS) establishes intermarket protection against trade-throughs for all NMS stocks. A trade-through occurs when one trading center executes an order at a price that is inferior to the price of a protected quotation, often representing an investor limit order, displayed by another trading center…. strong intermarket price protection offers greater assurance, on an order-by-order basis, that investors who submit market orders will receive the best readily available prices for their trades. 
To a layperson, the common sense meaning of a National Market System and the Order Protection Rule is that an arriving marketable order (say Order A) should be matched with the best readily available price in the market as a whole, before any order that is placed after Order A has made first contact with the market begins to be processed.

Given the large number of trading venues now in operation and the speeds at which communication occurs, it is important to be very clear about what these terms mean. If a marketable order arrives at an exchange, is partially filled, and then routed to another exchange, there will be a small gap in time before the second exchange receives what is left of the order. It is technologically possible for a third party to observe the first trade (either because they are a counterparty to it or have access to the data generated by it) and to act upon this information by sending orders to other exchanges. These may be orders to trade or to cancel, and may arrive at other exchanges before the first order has been fully processed.

Should these new orders, placed after Order A has made first contact with the market, be given priority over Order A in interacting with resting orders at other exchanges? It seems to me that the plain meaning of Congress’ directive and the order protection rule says that they should not.

IEX’s proposed design prevents this kind of event from taking place by delaying the dissemination of information generated by Order A’s first contact with the market until enough time has elapsed for the order to be fully processed. This brings the market closer to the national system envisaged by Congress, and indeed by the SEC itself.

It appears that the following example in a comment letter by Hudson River Associates, while submitted as an objection to the IEX application, actually supports this interpretation:
Example 3: IEX BD Router – IEX bypasses the POP allowing it to beat a member to another exchange
  • Member C has an order to buy at 10.00 resting on IEX. 
  • IEX has a routable sell order that fully executes Member C’s buy interest on IEX. 
  • When executed, Member C decides to update its buy order prices on another exchange from 10.00 to 9.99. 
  • The POP would delay Member C’s execution information by 350 microseconds. As a result, although Member C’s buy order on IEX has been executed, it does not know this for at least 350 microseconds. 
  • Before Member C is informed of its buy order execution, the IEX BD Router sends an order to the other exchange to execute against Member C’s buy order at 10.00 on the other exchange. 
  • Since Member C was not informed of its execution on IEX, its order at 10.00 on the other exchange is executed by the IEX BD Router before Member C can update the price to 9.99. 
This example refers to cancellation, but there is nothing to prevent Member C from placing marketable sell orders at 10.00 that trade ahead of the routable order. In either case, liquidity that was “readily available” when the routable sell order made first contact with the market is removed before this order has been fully processed.
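A toy timeline makes the race concrete. The 350-microsecond delay is from the example; the link latencies and reaction time below are illustrative assumptions, with Member C given a faster connection than the IEX router.

```python
# Toy timeline for the example above, in microseconds. The 350 us POP delay is
# from the example; the other numbers are illustrative assumptions.
POP_DELAY = 350      # delay before Member C learns of its fill on IEX
ROUTER_LINK = 200    # IEX router's transit time to the other exchange (assumed)
MEMBER_LINK = 150    # Member C's transit time to the same exchange (assumed faster)
REACTION = 10        # Member C's processing time before it reacts (assumed)

def race(pop_delay):
    t_router = ROUTER_LINK                          # routed order reaches the other exchange
    t_member = pop_delay + REACTION + MEMBER_LINK   # Member C's cancel/reprice arrives
    outcome = ("routed order fills at 10.00" if t_router < t_member
               else "Member C reprices to 9.99 first")
    return t_router, t_member, outcome

for delay in (POP_DELAY, 0):
    t_r, t_m, outcome = race(delay)
    print(f"POP delay {delay:>3} us: router at {t_r} us, Member C at {t_m} us -> {outcome}")
```

With the delay, the quote that was resting when the routable order made first contact is still there when the remainder arrives; without it, the quote can be repriced away before that order has been fully processed.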

What the author of this letter appears to want is that Member C should be able to place an order (to cancel or trade) after the routable sell order has made first contact with the market, and to have these orders interact with the market before the routable sell order has been fully processed. This kind of activity is currently permitted by the SEC, but to me seems to clearly violate the spirit if not the letter of Congress’ directive.

The design proposed by IEX, by preventing orders from trading out of sequence (measured with respect to first contact with the market) would bring the system closer to that envisaged by Congress. In a true national market system with multiple exchanges, each order would receive a timestamp marking its first contact with the market, and no order would begin to be executed until all orders with earlier timestamps had been fully processed. In making a determination on the IEX application, I would urge the commission to consider whether approval would bring the system closer to this ideal. And indeed, to think further about what other changes to the rules governing market microstructure would also achieve the same goal.
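For concreteness, here is a minimal sketch of the sequencing rule described above: every order is stamped at first contact with the market, and no order is released for execution until all orders with earlier stamps have been fully processed. The data structures and names are illustrative, not a proposal for an actual implementation.

```python
# Sketch of strict first-contact sequencing across venues: no order begins to
# execute until all orders with earlier first-contact timestamps are done.
# Purely illustrative; names and structures are not from the letter.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Order:
    first_contact: int                      # timestamp at first contact with the market
    order_id: str = field(compare=False)
    side: str = field(compare=False)        # "buy" or "sell"
    quantity: int = field(compare=False)

def process_in_sequence(orders):
    """Release orders to the matching layer strictly by first-contact time."""
    queue = list(orders)
    heapq.heapify(queue)
    while queue:
        order = heapq.heappop(queue)
        # In a real system this step would route across venues and fully fill
        # or exhaust the order before the next timestamp is released.
        print(f"t={order.first_contact}: fully process {order.order_id} "
              f"({order.side} {order.quantity})")

process_in_sequence([
    Order(first_contact=105, order_id="B", side="sell", quantity=200),
    Order(first_contact=100, order_id="A", side="buy", quantity=500),
    Order(first_contact=103, order_id="C", side="buy", quantity=100),
])
```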