Sunday, November 03, 2019

Deadly Force: Then and Now

On Wednesday, October 30 there was an extraordinary conference at the Schomburg Center, marking the 75th anniversary of Gunnar Myrdal's American Dilemma. The conference was conceived and organized by Alondra Nelson and Dan O'Flaherty, and video of the entire event is available in two parts here (click on the landing page to see a menu). A companion digital platform brings to a much wider audience research memoranda written by the many exceptional scholars who worked alongside Myrdal, but who remain largely "hidden figures" to this day.

Speakers at the conference were limited to ten minutes. My own remarks were based on Chapter 8 of my recent book with Dan, which draws on material from the Schomburg archives. The text is reproduced below with a few minor edits and links added (the full session is in the Part 2 recording starting at around 2:50:00):
I’m so immensely grateful to the organizers for the opportunity to speak on this occasion, with this amazing group of panelists.

I’d like to speak mostly about crime and policing, which is the topic of my recent book with Dan, and how this work led us to the archives of the Schomburg Center in search of information on the history of police-community relations, and data on the historical use of deadly force. 

As many other panelists have pointed out, American Dilemma was built on the work of dozens of researchers, who painstakingly assembled vast amounts of information. Only part of that knowledge made it into print; much of the rest remains largely hidden from view.

I’ll talk about what Dan and I found in the Schomburg archives in just a few minutes, but let me begin by saying a few words about what we know about police-related homicides today.

One thing we know is that we don’t know much—there’s still no complete and reliable source of official data on the use of deadly force by police in the United States.

As Paul Butler has written in his book Chokehold, the “information about itself that a society collects—and does not collect—is always revealing about the values of that society. We know, as we should, exactly how many police officers are killed in the line of duty. But we do not know, as we should, exactly how many civilians are killed by the police.”

Even James Comey, when he was FBI Director in 2015, described the absence of official statistics on police homicides as embarrassing, ridiculous, and unacceptable.

But over the past few years, unofficial statistics have started to be compiled, some by traditional media organizations like the Guardian and the Washington Post, and others by relatively new online sources like Mapping Police Violence and Fatal Encounters.

These data only go back a few years, but we can already see a few patterns that I’d like to bring to your attention.
First, the scale of police killing in the United States far exceeds that in other comparable countries. According to the Guardian data, police kill about 1,100 civilians a year. In contrast, German police kill about 8 and British police about 2. The US population is about three times as large as these countries combined, but the rate of deadly force is more than a hundred times as great. 
Second, there are significant racial and ethnic disparities in exposure to deadly force. The most highly exposed groups are African Americans and Native Americans, followed by Latinos, and the least exposed are whites and Asians. In the Guardian data, for example, African Americans are about two and a half times as likely to be victims of lethal force as white civilians. But these racial and ethnic disparities vary widely by location: in the five largest cities, the ratio of black to white exposure to lethal force ranges from four in Houston to eighteen in Chicago.
Third, there are staggering differences across states in the incidence of lethal force. The deadliest states have about eight times the rate of lethal force as the safest. Police homicides occur most often in Western states and parts of the South. The eight states with the highest incidence in the Guardian data are New Mexico, Oklahoma, Alaska, Arizona, Wyoming, West Virginia, Colorado, and Nevada. Six of these are in the West, the other two in the South. By contrast, the safest states are in the Northeast.
Fourth, extremely large differences also exist among the largest cities. New York and Los Angeles are both large, diverse, coastal, and liberal cities with strict gun laws, but every demographic group is much safer in New York than in Los Angeles today. White civilians in Los Angeles are almost four times as likely to be killed by police as those in New York. Latinos in Los Angeles are more than eight times as likely to be killed as those in New York. And Houston is even deadlier for white civilians than Los Angeles. In fact, the difference in overall rates is so great that white residents of Houston are more likely to be killed by police than African Americans in New York City.
Fifth, and this came as a surprise to us, many states in the South, including the secessionist states of the former confederacy, have smaller racial disparities in exposure to lethal force than states elsewhere. Many of these Southern states have approximate parity between rates of lethal force faced by black and white civilians in the Guardian data. This is true of Mississippi, Alabama, South Carolina, Georgia, Arkansas, and Tennessee for example.

Bear in mind that these data are very recent and possibly incomplete, so these patterns may not hold up as better data become available. But we can make some tentative comparisons with the 1930s, based on information in the Schomburg archives.

Among the researchers who did the groundwork for American Dilemma was the sociologist Arthur Raper, who surveyed a large number of police departments by mail about police-related homicides in the five years ending in 1940. A total of 228 departments responded. These departments represented about 13 percent of the national population, and about 20 percent of the national black population at the time.

According to Raper’s data, police killed roughly four times as many African Americans as lynch mobs did in the 1930s. In fact, police accounted for more African American deaths than all other white Americans combined. This remains approximately true even today.

Many cities had much higher rates of killing in the 1930s than they do now. Denver, Covington KY, and Jacksonville had rates over fifty per million in the Raper data, and Atlanta, Nashville, Kansas City, and Chattanooga had rates above forty per million. In the Guardian data, only two cities—Miami and Stockton, CA—had rates in this range.

There are fifty-two cities in Raper’s data that had over 50,000 people in 1940. In this group of cities, the rate at which African Americans were killed by police fell from about twenty per million in 1935–1940 to about ten in 2015–2016. So at least in the South, the incidence of lethal force faced by black civilians has declined, although from an extremely high level.

One of the points that Dan and I have explored in our book is that fearsomeness and fearfulness are two sides of the same coin. Murder is the only major crime that can be motivated by pure preemption—people sometimes kill simply to avoid being killed first. This makes fearful people dangerous, and fearsome people afraid. When people can be killed with impunity, these effects are amplified and very high rates of killing can arise in a climate of fear.

In the 1930s fear was rampant—both fear of police and fear by police. Drawing on prior work by H. C. Brearley, Raper observed that between 1920 and 1932, more than half of interracial homicides in which the killer’s identity was known were either slayings of black civilians by white police officers or slayings of white officers by black civilians. Along similar lines, Khalil Muhammad has observed in his pioneering book The Condemnation of Blackness that according to “dozens of letters written by black suspects and convicts to the NAACP in the 1920s, self-defense was one of the most frequently cited causes of interracial homicide of white male citizens and police officers by black men.”

In fact, one of Raper’s most striking findings is the extremely high rate at which officers in the South were killed in the 1930s when compared with today. Among Raper’s respondents, 1.3 police officers were killed per year per million population, while current rates are between 0.1 and 0.2 per year per million. It seems that officers have become much safer from civilians than civilians have become from officers.

Since American Dilemma was largely a study of the South, we don’t have comparable historical data for other parts of the country. What we do know, though, is that variations in the use of lethal force across law enforcement agencies are immense. And these differences persist even when one takes into account such factors as gun prevalence, crime intensity, police-civilian contact, arrest rates, and the degree of danger faced by officers themselves.

It seems that selection, training, leadership, and organizational culture matter a great deal. Put differently, high rates of deadly force arise not from bad apples, but from bad orchards. Certain soils are fertile environments for the growth of practices that result in high rates of killing. We don’t yet have a good understanding of what makes them so. But we do understand that the painstaking work of a team of talented researchers three generations ago, and the efforts to preserve the fruits of their labor right here at the Schomburg Center, will be of enormous help to us as we grapple with these questions.
An additional and very valuable source of historical information on the use of deadly force in the United States is the Kerner Commission report of 1968. The report is discussed at length in the book, and some of the key lessons are described in an article that Dan and I wrote for the Marshall Project a few months ago. An interview with Phillip Adams of Late Night Live on ABC (Australia) and a more recent conversation with Tonya Mosley on NPR's Here and Now also cover some of this ground.

The book is about much more than deadly force though; it deals with how stereotypes condition and contaminate all sorts of interactions related to crime and the justice system, including interactions between victims and offenders, officers and suspects, prosecutors and witnesses, judges and defendants, and so on. If you have an hour to spare, this detailed, probing conversation with Mary-Charlotte Domandi of the Radio Café podcast covers the essentials and broader implications of the argument.

And if you happen to be in New York on November 14 and would like to attend a panel on the book, featuring Valerie Purdie Greenaway, Carla Shedd, and Suresh Naidu, please stop by, details here, no registration required.

Sunday, September 03, 2017

Innovation in Economics Pedagogy and Publishing

Well, it's Labor Day weekend, which means that Barnard and Columbia students are back on campus and classes are about to begin. This semester I'm teaching a seminar based on a book I'm writing with Dan O'Flaherty, and... Introduction to Economic Reasoning.

It's my first time teaching an introductory course in well over a decade and, to be honest, I never thought I'd ever do so willingly again. But this time I volunteered, and am excited to start. It's the culmination of an extraordinary journey that began almost five years ago, when Wendy Carlin of UCL contacted me about joining an initiative that eventually led to the CORE Project. Our first major accomplishment is a new book, The Economy, produced simultaneously for digital and print:


The digital version is available free of charge worldwide, released under a Creative Commons license, while the print version sold by Oxford University Press retails for under fifty dollars in the United States, about a sixth of the price of a standard textbook.

And it's a lot more interesting and fun to read than a standard text. We started from scratch in producing it, incorporating a lot of economic history, data, experiments, and interesting theory – including social preferences, strategic interaction, incomplete information, incomplete contracts, disequilibrium dynamics, and more.

But the content innovation is just part of the story. Above all, it was an incredible process innovation, involving authorship by more than twenty scholars scattered across the world, some making contributions to just a couple of units while others (Wendy Carlin, Sam Bowles, and Margaret Stevens) contributed to just about everything and ensured continuity and coherence. My own substantive contributions were to units 11 and 12, on market dynamics and market breakdowns, and to the profiles of some great economists of the past. But we all chipped in here and there, reading and offering minor suggestions wherever our own particular expertise turned out to be an asset.

And then there's the publishing innovation, which Arthur Attwell describes very nicely here:
I’m a book-maker, which, for the most part, means I turn Word documents and Powerpoint slides into books. These days, my team and I also turn them into websites and ebooks. To do it well, we draw on 500 years of book-making craft. And very rarely we get to try to add something to that craft.

The CORE project – specifically, the production of their textbook The Economy – has enabled us to do really exciting, perhaps pioneering, book-making work...

For over fifteen years, book-makers like me have been pulled in two directions: you’re a print person or you’re a digital person. This is largely a practical matter: the skills and tools for each have been completely different. Which meant the workflows for creating each format were completely different, as were their distribution channels... the practical matter of skills has framed the evolution of publishing as ‘print vs digital’, when of course the conversation should be about print and digital. Not just because we’re stuck with a multiformat world whether we like it or not, but because print and digital formats are symbiotic. In ambitious book projects, especially where we want a book to have a social impact, neither can be successful without the other.
Print books generate instant credibility. They carry a sense of permanence and authority that digital formats cannot muster... But print does not scale, and it’s locked into a funding model where the end-user pays for every copy.
Digital formats, and websites in particular, are the opposite. Web publications struggle to muster the authority of a printed book, but they scale instantly and allow for a range of funding models... Books as websites can be public goods in a way that printed books cannot, especially for the poor.
So, when a book needs to make an impact, it simply must be in print and digital formats. It cannot have impact without the authority of print. And it cannot have impact without the scale of the web...

For most book-makers like me, who make print and digital publications, this has meant creating two versions: the print edition and the digital edition. The print edition is usually the master, and the digital version a laborious, post-production conversion.
This is an expensive process, often done by teams of glorified copy-pasters. And since most books need to be corrected and updated after a short time, everything must be done twice, and version control between the formats is error-prone.
Clearly the holy grail for book-production workflows is to produce all formats from one source simultaneously. Many teams have tackled this challenge. Big incumbents like Adobe have tried valiantly to extend their print-production tools to produce ready-to-use digital formats, but their roots in page design are too deep to make this simple or scalable. And, given the nature of the web and the high costs of developing software, digital workflows based on proprietary software don’t spread or become standards.
For print-and-digital book production to grow we need open-source tools that produce high-end, print-ready files and sensible websites. With the CORE project, we are right at the frontier.
And the production quality has to be seen to be believed. The book loads almost instantaneously in a browser, and renders beautifully on a mobile device. And it contains features that make the use of supplementary slides unnecessary. Take a look, for instance, at Figure 1.2. You'll notice six slides in the sidebar; click through each of these in turn. You'll see global inequality along three dimensions: within and between countries, and across time. Watch the movement of the entire income distribution in China from 1980 to the present day, as it leapfrogs one set of countries after another. There's no need for ancillary resources: just project the book itself on a screen and talk through it. As Arthur says, we are at the frontier.

Finally, there's the innovation in outreach, in building a community of adopters, and getting graduate students excited about teaching again. Last month we launched CORE-USA, with a workshop involving about twenty graduate students and thirty faculty, funded by a generous grant from the Teagle Foundation. This is the first of several such workshops, one of the primary goals of which is to identify graduate students with strong potential in both teaching and scholarship, and provide them with exposure to our materials and community.

These students will be designated CORE-Teagle Fellows, which we hope will provide a strong, positive signal as they enter the academic job market in a year or two. If you're hiring, look out for these pioneers, and if you're a graduate student, consider applying for next year's workshop. And if you'd like to support this initiative, just buy the print version of The Economy. You'll enjoy it, and a small portion of the proceeds will flow to the non-profit that produced it.

There's some serious disruption going on in economics pedagogy and textbook publishing right now, and it's exciting to be in the thick of things.

Tuesday, March 07, 2017

The Teaching of Economics

In 2013, with funding from the Institute for New Economic Thinking, University College London, Friends Provident Foundation, Azim Premji University (Bangalore) and Sciences Po (Paris), a group of concerned economists created CORE, the Curriculum Open-Access Resources for Economics. Wendy Carlin from University College London led the initiative, and I was fortunate enough to have been involved from the outset. The group soon grew to encompass a couple of dozen members from a broad range of countries including France, Chile, Colombia, Turkey, and India.

CORE’s vision is that economics should be an inquiry into the fundamental problems facing humanity today and the ways that economic reasoning can address them, not just a training in abstract problem solving. We sought to directly address the problem of a lack of good teaching resources consistent with this vision, and the attendant issues, including limited incentives for faculty and their teaching teams to make use of what is available. 
   
Since its launch, CORE has successfully begun to produce high-quality resources for the teaching of economics through this global collaboration of scholars, and to distribute these resources free of charge worldwide under a Creative Commons license. Our e-book The Economy, currently in beta, is being taught as the required introductory course at University College London (UCL), the Toulouse School of Economics, Humboldt University (Berlin), and other top economics departments in Europe. It is also being taught at the Lahore University of Management Sciences, Azim Premji University (Bangalore), the University of Sydney, and Universidad de los Andes (Bogota). More than 2,300 verified instructors have been cleared for access to CORE’s full range of supplementary teaching materials, and over thirty thousand students spread across 78 different countries have registered for access to the e-book.

As part of its strategy to improve the teaching of economics CORE is now seeking to expand its outreach and impact in the United States. It will do so in collaboration with Barnard College, which has just received a major award from the Teagle Foundation for exactly this purpose. My department colleagues Homa Zarghamee and Belinda Archibong will join me in directing this effort. 

At the heart of the initiative is a series of workshops involving faculty and graduate students, who will be selected through a competitive application process and provided with stipends and partial reimbursement of travel costs. These workshops will be designed to bring together instructors who already have experience with implementing CORE, and a larger group of potential adopters. The first workshop will be held at Barnard on August 17-19, 2017. We will post a call for applications soon, and are currently in the process of hiring a project manager.

Among our goals is the creation of a cadre of confident, networked, new PhDs excited about making teaching a fulfilling and central part of their career in economics. Graduate students who complete a workshop will be certified as CORE-Teagle Fellows, a designation that we hope will become a credible signal of commitment to quality teaching among employers, especially liberal arts colleges and public policy schools. We would also like this to be a signal of scholarship potential, and will accordingly screen applicants for exceptional promise in both research and teaching.

Over the longer term, we also want to identify a set of institutional partners with shared goals for the improvement of economics education and a commitment to the development and use of high quality, open access instructional content. To further these goals, we will launch the CORE Consortium, a membership program for institutions willing to enter into a long-term, multi-year commitment to support faculty and graduate students using CORE, and host workshops on a rotating basis. By the end of the 36-month period covered by the Teagle grant, we hope to have at least half a dozen institutions on board as members, as well as a leadership team and an administrative structure.

We are enormously grateful to the Teagle Foundation for funding this exciting new initiative. Further updates will follow once a project manager is in place.

Wednesday, March 01, 2017

Reigns of Error

The death of Kenneth Arrow has led lots of people to swap stories about their interactions with him. Larry Blume has posted several of these on Facebook, including the following response to my own contribution (quoted with permission):
This story is not at all surprising; Ken read everything. I think I mentioned elsewhere that my last conversation with Ken, this past June, concerned The Theory of Moral Sentiments. He and Amartya Sen were taking turns quoting from it, from memory... I could recognize the quotes, but not respond in kind. Once in a conversation about Nash equilibrium and rational expectations, Ken wondered if I had read Merton on expectations - not Robert Jr.: https://www.jstor.org/stable/4609267. He also had a good stock of Shakespeare to call on.
The link is to a 1948 paper by the great sociologist Robert K. Merton (father of the Nobel-winning economist). Reading anything at all by Merton is an excellent use of one's time, so I went through this paper. It's extraordinary. Not only does Merton provide a very clear account of equilibrium beliefs; he goes on to point out that even when these beliefs are correct in a narrow sense, they can hold in place an incorrect understanding of the social world. To translate this into the contemporary language of economics: the play of equilibrium strategies can go hand in hand with a deeply erroneous understanding of the game.

Merton begins with an account of a Depression-era bank run that perfectly captures the multiple equilibrium logic he has in mind:
It is the year 1932. The Last National Bank is a flourishing institution. A large part of its resources is liquid without being watered. Cartwright Millingville has ample reason to be proud of the banking institution over which he presides. Until Black Wednesday. As he enters his bank, he notices that business is unusually brisk. A little odd, that, since the men at the A.M.O.K. steel plant and the K.O.M.A. mattress factory are not usually paid until Saturday. Yet here are two dozen men, obviously from the factories, queued up in front of the tellers' cages. As he turns into his private office, the president muses rather compassionately: "Hope they haven't been laid off in midweek. They should be in the shop at this hour."
But speculations of this sort have never made for a thriving bank, and Millingville turns to the pile of documents upon his desk. His precise signature is affixed to fewer than a score of papers when he is disturbed by the absence of something familiar and the intrusion of something alien. The low discreet hum of bank business has given way to a strange and annoying stridency of many voices. A situation has been defined as real. And that is the beginning of what ends as Black Wednesday -- the last Wednesday, it might be noted, of the Last National Bank.
You can see why Arrow saw in this a precursor to the concept of Nash equilibrium, the existence of which would be established just two years later. There are also echoes here of the Diamond and Dybvig model of bank runs, in which the multiple equilibrium nature of the problem finds formal expression.
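Here is a minimal sketch of that multiple-equilibrium logic: a two-depositor game in which "both wait" and "both run" are each self-confirming. The payoff numbers are illustrative assumptions of mine, not taken from Merton or from the Diamond-Dybvig model; the point is only that the same bank supports both outcomes.

```python
# A minimal sketch of the two-equilibrium logic, with illustrative payoffs of
# my own choosing. Each depositor has 1 in the bank; the long asset pays 1.5
# if both wait, early liquidation is costly, and a lone waiter recovers 0.6.
ACTIONS = ("wait", "withdraw")
PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("wait", "wait"):         (1.5, 1.5),
    ("wait", "withdraw"):     (0.6, 1.0),
    ("withdraw", "wait"):     (1.0, 0.6),
    ("withdraw", "withdraw"): (0.8, 0.8),
}

def payoff(player, own, other):
    """Payoff to `player` (0 = row, 1 = column) from own and opponent actions."""
    profile = (own, other) if player == 0 else (other, own)
    return PAYOFFS[profile][player]

def is_best_response(player, own, other):
    return all(payoff(player, own, other) >= payoff(player, alt, other)
               for alt in ACTIONS)

pure_nash = [(a, b) for a in ACTIONS for b in ACTIONS
             if is_best_response(0, a, b) and is_best_response(1, b, a)]
print(pure_nash)  # [('wait', 'wait'), ('withdraw', 'withdraw')]
```

Which of the two equilibria prevails depends entirely on what each depositor believes the other will do, which is precisely Merton's "definition of the situation."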

But Merton doesn't stop there; he considers how the people engaged in this behavior interpret the situation they are in. And here he observes an important disparity between the way the situation is viewed by the participants themselves and its interpretation from the analytical viewpoint of the social scientist:
The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behavior which makes the originally false conception come true. The specious validity of the self-fulfilling prophecy perpetuates a reign of error. For the prophet will cite the actual course of events as proof that he was right from the very beginning. (Yet we know that Millingville's bank was solvent, that it would have survived for many years had not the misleading rumor created the very conditions of its own fulfillment.) Such are the perversities of social logic.
So beliefs are correct in one sense, but at sharp variance with reality in another. Such "reigns of error" are not something we economists pay much attention to, with one very notable exception. 

In his book The Anatomy of Racial Inequality Glenn Loury discusses the manner in which negative stereotypes about a group can become self-fulfilling through the incentive effects that the stereotypes themselves create. This is the phenomenon of statistical discrimination, introduced into the economics literature by none other than Kenneth Arrow. Like Merton, however, Loury is not content to simply identify the kinds of behaviors consistent with equilibrium beliefs. He wants to know how people with these beliefs will interpret the behaviors. And here he deploys the idea of biased social cognition, which can give rise to essentialist causal misattributions.

That is, behavior arising in equilibrium through the operation of incentives can be interpreted by casual observers as being a consequence of deep differences in character. And this has enormous consequences, since biased social cognitions can "cause some situations to appear anomalous, disquieting, contrary to expectation, worthy of further investigation, inconsistent with the natural order of things---while other situations appear normal, about right, in keeping with what one might expect, consistent with the social world as we know it."
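To make the self-fulfilling mechanism concrete, here is a stylized sketch in the spirit of the Coate-Loury model of statistical discrimination. The functional form and numbers are illustrative assumptions of mine, not anything from Loury's book: workers invest in skills when their cost is below the benefit, and the benefit rises with the employer's belief about the group.

```python
# A stylized fixed-point sketch in the spirit of the Coate-Loury model of
# self-confirming stereotypes. The benefit function and numbers below are
# illustrative assumptions, not taken from Loury's book.
#
# Workers draw an investment cost uniformly from [0, 1]. If the employer
# believes a fraction `belief` of the group is qualified, the return to
# investing is benefit(belief), so the realized share of investors is
# min(benefit(belief), 1), which then becomes the employer's new belief.

def benefit(belief):
    # Increasing in the employer's belief: trust makes investment pay off.
    return 1.2 * belief ** 2

def realized_share(belief):
    return min(max(benefit(belief), 0.0), 1.0)

def self_confirming(initial_belief, rounds=60):
    """Iterate belief -> realized investment share until it settles."""
    b = initial_belief
    for _ in range(rounds):
        b = realized_share(b)
    return round(b, 3)

for start in (0.2, 0.5, 0.8, 0.9):
    print(start, "->", self_confirming(start))
# 0.2, 0.5 and 0.8 all collapse to 0.0 (the negative stereotype confirms
# itself); 0.9 converges to 1.0. The same incentives support both outcomes.
```

An observer who sees only the low-investment outcome may misread an equilibrium response to incentives as evidence of essential difference, which is exactly the misattribution Loury has in mind.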

Loury has argued elsewhere that the level of mass incarceration currently prevailing in the United States could not possibly be sustained were it not for its racial character. As long as essentialist interpretations of incentive-driven actions continue to be widespread, such high levels of confinement will not be seen as anomalous or disquieting, and will not give rise to urgent calls for action.

The economic method, for all its flaws, has one very important virtue: it shines a bright light on interests and incentives, and in doing so can challenge essentialist interpretations of social reality. But if this potential is to be realized, it is important to focus not just on the characterization of equilibrium behavior, but also on the reigns of error that distort our mental models of the underlying game.

Saturday, February 25, 2017

Arrow, Edgeworth, and Millicent Garrett Fawcett

There's not much one can say about Kenneth Arrow that hasn't already been said, but there's one personal story that I can add to all the tributes and remembrances. 

I met Arrow just once, at a Stanford conference in April 2008 that he and Matt Jackson jointly organized. While everyone else was seated around the outside of a large ring of tables, Arrow was on the inside, directly in front of the speaker. He was 86 at the time.

I was first up, presenting an early version of a paper with Sam Bowles and Glenn Loury on group inequality. Arrow interrupted me within the first couple of minutes – not aggressively at all, just seeking clarification about the information structure. Then, during a coffee break after the talk, he asked if I’d read a piece by Millicent Fawcett on gender wage inequality, published in the Economic Journal in 1892. That’s not a typo – he really meant 1892. I confessed that I hadn't.

Arrow said that Fawcett’s work was extensively discussed in a 1922 presidential address by Francis Edgeworth, but while many were familiar with the Edgeworth lecture, few had bothered to read Fawcett herself. 

It’s true. Edgeworth mentioned “Mrs. Fawcett” seven times in his address, and cited three separate pieces by her. His lecture was on “Equal Pay to Men and Women for Equal Work,” and one of the papers he referenced was “Equal Pay for Equal Work,” published by Fawcett in 1918. Here’s how the latter begins:



I didn’t realize it at the time, but Dame Millicent Garrett Fawcett was every bit as remarkable as Edgeworth and Arrow, and economics was the least of her accomplishments. I imagine that Arrow saw in her a kindred spirit.

Wednesday, December 14, 2016

Thomas Schelling, Methodological Subversive

Thomas Schelling died at the age of 95 yesterday.

At a time when economic theory was becoming virtually synonymous with applied mathematics, he managed to generate deep insights into a broad range of phenomena using only close observation, precise reasoning, and simple models that were easily described but had complex and surprising properties.

This much, I think, is widely appreciated. But what also characterized his work was a lack of concern with professional methodological norms. This allowed him to generate new knowledge with great freedom, and to make innovations in method that may end up being even more significant than his specific insights into economic and social life. 

Consider, for instance, his famous "checkerboard" model of self-forming neighborhoods, first introduced in a memorandum in 1969, with versions published in a 1971 article and in his 1978 book Micromotives and Macrobehavior. This model is simple enough to be described verbally in a couple of paragraphs, but has properties that are extremely difficult to deduce analytically. It is also among the very earliest agent-based computational models, reveals some limitations of the equilibrium approach in economic theory, and continues to guide empirical research on residential segregation.

Here's the model. There is a set of individuals partitioned into two groups; let's call them pennies and dimes. Each individual occupies a square on a checkerboard, and has preferences over the group composition of its neighborhood. The neighborhood here is composed of the (at most) eight adjacent squares. Each person is content to be in a minority in their neighborhood, as long as minority status is not too extreme. Specifically, each wants strictly more than one-third of their neighbors to belong to their own group. 

Initially suppose that there are 60 individuals, arrayed in a perfectly integrated pattern on the board, with the four corners unoccupied. Then each individual in a central location has exactly half their neighbors belonging to their own group, and is therefore satisfied. Those on the edges are in a slightly different situation, but even here each individual has a neighborhood in which at least two-fifths of residents are of their own type. So they too are satisfied.

Now suppose that we remove twenty individuals at random, and replace five of these, placing them in unoccupied locations, also at random. This perturbation will leave some individuals dissatisfied. Now choose any one of these unhappy folks, and move them to a location at which they would be content. Notice that this affects two types of other individuals: those who were previously neighbors of the party that moved, and those who now become neighbors. Some will be unaffected by the move, others may become happy as a result, and still others may become unhappy. 

As long as there are any unhappy people on the board, repeat the process just described: pick one at random, and move them to a spot where they are content. What does the board look like when nobody wants to move?

Schelling found that no matter how often this experiment was repeated, the result was a highly segregated residential pattern. Even though perfect integration is clearly a potential terminal state of the dynamic process just described, it appeared to be unreachable once the system had been perturbed. The assumed preferences are tolerant enough to be consistent with integration, but decentralized, uncoordinated choices by individuals appear to make integration fragile, and segregation extremely stable. Here's how Schelling summarized the insight:
People who have to choose between polarized extremes... will often choose in a way that reinforces the polarization. Doing so is no evidence that they prefer segregation, only that, if segregation exists and they have to choose between exclusive association, people elect like rather than unlike environments.
One can tune the parameters of the model (the population size and density, or the preferences over neighborhood composition) and see that this key insight is robust. And for reasons discussed in this essay, equilibrium reasoning alone cannot be used to uncover it.
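For anyone who wants to experiment, here is a minimal sketch of the dynamics just described, written in Python. A couple of simplifying assumptions are mine rather than Schelling's: the board starts from a random configuration rather than the perfectly integrated pattern with a perturbation, an agent with no occupied neighbors is treated as content, and the board size and vacancy rate are arbitrary. The qualitative result is the same: the average own-group share among neighbors typically rises well above the integrated benchmark of one half.

```python
import random

SIZE = 12          # board dimension (an assumption; any modest grid will do)
VACANCY = 0.15     # fraction of squares left empty
THRESHOLD = 1 / 3  # want strictly more than this share of neighbors to match

def neighbors(board, r, c):
    """Groups of the occupants of the (at most) eight adjacent squares."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < SIZE and 0 <= cc < SIZE:
                if board[rr][cc] is not None:
                    out.append(board[rr][cc])
    return out

def content(board, r, c, group):
    """Would an agent of `group` be content at square (r, c)?"""
    nbrs = neighbors(board, r, c)
    if not nbrs:            # convention (mine): no neighbors means content
        return True
    return nbrs.count(group) / len(nbrs) > THRESHOLD

def random_board(rng):
    squares = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    rng.shuffle(squares)
    board = [[None] * SIZE for _ in range(SIZE)]
    occupied = squares[: int(len(squares) * (1 - VACANCY))]
    for i, (r, c) in enumerate(occupied):
        board[r][c] = "penny" if i % 2 == 0 else "dime"
    return board

def own_group_share(board):
    """Average share of like neighbors; about 0.5 means integrated."""
    shares = []
    for r in range(SIZE):
        for c in range(SIZE):
            group = board[r][c]
            nbrs = neighbors(board, r, c) if group else []
            if nbrs:
                shares.append(nbrs.count(group) / len(nbrs))
    return sum(shares) / len(shares)

def run(seed=0, max_moves=20000):
    rng = random.Random(seed)
    board = random_board(rng)
    print(f"initial own-group share: {own_group_share(board):.2f}")
    for _ in range(max_moves):
        unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
                   if board[r][c] and not content(board, r, c, board[r][c])]
        if not unhappy:
            break
        r, c = rng.choice(unhappy)          # pick one unhappy agent at random
        group, board[r][c] = board[r][c], None
        options = [(rr, cc) for rr in range(SIZE) for cc in range(SIZE)
                   if board[rr][cc] is None and content(board, rr, cc, group)]
        if options:
            rr, cc = rng.choice(options)    # move to a square where content
            board[rr][cc] = group
        else:
            board[r][c] = group             # nowhere better: stay put
    print(f"final own-group share:   {own_group_share(board):.2f}")

if __name__ == "__main__":
    run()
```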

A very different kind of contribution, but also one with important methodological implications, may be found in Schelling's 1960 classic The Strategy of Conflict. Here he considers the adaptive value of pretending to be irrational, in order to make threats or promises credible (emphasis added):
How can one commit himself in advance to an act that he would in fact prefer not to carry out in the event, in order that his commitment may deter the other party? One can of course bluff, to persuade the other falsely that the costs or damages to the threatener would be minor or negative. More interesting, the one making the threat may pretend that he himself erroneously believes his own costs to be small, and therefore would mistakenly go ahead and fulfill the threat. Or perhaps he can pretend a revenge motivation so strong as to overcome the prospect of self-damage; but this option is probably most readily available to the truly revengeful
Similarly, in bargaining situations, "the sophisticated negotiator may find it difficult to seem as obstinate as a truly obstinate man." And when faced with a threat, it may be profitable to be known to possess "genuine ignorance, obstinacy or simple disbelief, since it may be more convincing to the prospective threatener."

Starting with three classic papers in the same 1982 issue of the Journal of Economic Theory, a large literature in economics has dealt with the implications for rational behavior of interacting with parties who, with small likelihood, may not be rational. While this work has focused on characterizing rational responses to irrationality, Schelling's point speaks also to payoffs, and raises the possibility that departures from rationality may have adaptive value.

The methodological implications of this are profound, because the idea calls into question the normal justification for assuming that economic agents are in fact fully rational. Jack Hirshleifer explored the implications of this in a wonderful paper on the adaptive value of emotions, and Robert Frank wrote an entire book about the topic. But the idea is right there, hidden in plain sight, in Schelling's parenthetical comments.  

Finally, consider Schelling's burglar paradox, also described in The Strategy of Conflict:
If I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to just leave quietly, and I wish him to, there is danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on. "Self-Defense" is ambiguous, when one is only trying to preclude being shot in self-defense.
Sandeep Baliga and Tomas Sjöström have shown exactly how such reciprocal fear can lead to a fatal unraveling, and explored the enormous consequences of allowing for pre-play communication in the form of cheap talk. And I have previously discussed the importance of this reasoning in accounting for variations in homicide rates across time and space, as well as the effects of Stand-your-Ground laws.
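Here is a stylized sketch of the unraveling, loosely in the spirit of the Baliga-Sjöström analysis. The payoffs and the type distribution are illustrative assumptions of mine: each player's cost of shooting is drawn uniformly from an interval that includes a small mass of types who positively want to shoot, and holding fire is costly if the other side shoots first.

```python
# A stylized "reciprocal fear" sketch, loosely in the spirit of the analysis
# cited above. The payoffs and type distribution are illustrative assumptions
# of mine. Each player privately draws a cost of shooting c from a uniform
# distribution on [-0.1, 1.0]; types with c < 0 shoot no matter what. Holding
# fire costs D if the other side shoots first, so a player shoots whenever
# c < D * Pr(other side shoots). Iterating this best response from the
# dominant types upward traces the contagion of fear.

C_LOW, C_HIGH = -0.1, 1.0
D = 1.5  # harm from being shot while holding fire

def prob_shoot(cutoff):
    """Probability that a uniformly drawn type lies below the shooting cutoff."""
    return min(max((cutoff - C_LOW) / (C_HIGH - C_LOW), 0.0), 1.0)

cutoff = 0.0  # round 0: only the dominant types (c < 0) shoot
for step in range(1, 8):
    cutoff = min(D * prob_shoot(cutoff), C_HIGH)
    print(f"round {step}: every type with c < {cutoff:.2f} shoots")
# The cutoff climbs to 1.0: eventually every type shoots, each one only to
# avoid being shot first.
```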

There are a handful of social scientists whose impact on my own work is so profound that I can't imagine what I'd be writing if I hadn't come across their work. Among them are Glenn Loury, Elinor Ostrom, and Thomas Schelling. I can think of at least five papers: on segregation, on variations in homicide across regions and communities, on reputation in bargaining, and on social norms, that flow directly from Schelling's thought. 

It may surprise some to know that Glenn Loury's Du Bois lectures are dedicated to Schelling, but it makes perfect sense to me. Here's how Glenn explains his choice in the preface:
Shortly after arriving at Harvard in 1982 as a newly appointed Professor of Economics and of Afro-American Studies, I began to despair of the possibility that I could successfully integrate my love of economic science with my passion for thinking broadly and writing usefully about the issue of race in contemporary America. How, I wondered, could one do rigorous theoretical work in economics while remaining relevant to an issue that seems so fraught with political, cultural and psychological dimensions? Tom Schelling not only convinced me that this was possible; he took me by the hand and showed the way. The intellectual style reflected in this book developed under his tutelage. My first insights into the problem of "racial classification" emerged in lecture halls at Harvard's Kennedy School of Government, where, for several years in the 1980s, Tom and I co-taught a course we called "Public Policies in Divided Societies." Tom Schelling's creative and playful mind, his incredible breadth of interests, and his unparalleled mastery of strategic analysis opened up a new world of intellectual possibilities for me. I will always be grateful to him.
As, indeed, will I.

Wednesday, November 02, 2016

The Prediction Market Paradox

There’s a reason why campaigns are eager to publicize polls that show them ahead, while downplaying those in which they happen to be trailing. The perception that a candidate is losing can depress donations and volunteer effort, and lower morale and turnout among supporters. Hence polls that show tightening of a race are often advertised as indicators of momentum by the trailing party, and as outliers by the leader. The actual likelihood of victory is not independent of beliefs about this likelihood.

This gives rise to what might be called a prediction market paradox. If prices are widely believed to accurately reflect underlying probabilities, then there is an incentive for deep-pocketed partisans to try and manipulate these prices at the margin. But if the possibility of manipulation is salient and prices are treated with skepticism, then incentives to manipulate are weakened and prices will in fact be quite accurate reflections of underlying beliefs.

An interesting illustration of this phenomenon is the recent decision by PredictIt to post an electoral college map, updated by the minute, that aggregates probabilities derived from all its state-level markets. Here's what the map looks like at the moment:


There are seven categories: the safe, likely, and leaning states for each candidate and one toss-up category. States shift across categories as prediction market prices cross the relevant thresholds. This way, a broad range of probability assessments is mapped onto a much coarser set that is easy to visualize and process.

But this creates the possibility that small changes in price, of the order of one cent, can lead to reassignments across categories that generate a very different picture. The incentive to manipulate prices is amplified whenever such categorical switches are feasible.
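To see how small the required price movement is, here is a minimal sketch of the price-to-category mapping. Only the 75 percent lean/likely boundary is mentioned below; the remaining cutoffs are hypothetical placeholders of mine, not PredictIt's actual thresholds.

```python
# A minimal sketch of the price-to-category mapping. Only the 75% lean/likely
# boundary is mentioned in the post; the other cutoffs here are hypothetical
# placeholders, not PredictIt's actual thresholds.

BANDS = [  # lower bound on the probability of a Clinton win, and the label
    (0.95, "safe Clinton"),
    (0.75, "likely Clinton"),
    (0.55, "lean Clinton"),
    (0.45, "toss-up"),
    (0.25, "lean Trump"),
    (0.05, "likely Trump"),
    (0.00, "safe Trump"),
]

def categorize(price):
    """Map a state's market price (win probability) to a map category."""
    for cutoff, label in BANDS:
        if price >= cutoff:
            return label
    return BANDS[-1][1]

# A one-cent move across the 75% boundary repaints the state on the map,
# which is exactly what sharpens the incentive to nudge the price.
print(categorize(0.745))  # lean Clinton
print(categorize(0.755))  # likely Clinton
```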

Of course these incentives apply to both sides of the market, with some traders wishing to shift states to the left while others are pushing to the right. As a result, an unusually large number of states may be expected to bounce back and forth across boundaries, and to remain within a narrow band of prices close to those selected (somewhat arbitrarily) by the exchange as thresholds.

This seems to be what we are seeing. The boundary between the lean and likely Clinton states is determined by a 75% threshold, and we see four states (Wisconsin, Michigan, Colorado, and Pennsylvania) all within a point or two of this. Here are those above the threshold:


And those below:


New Hampshire is not far from the boundary either. 

All this could be just coincidence, but if one looks at probabilistic forecasts from other sources, there is no such pattern. The New York Times conveniently collects six probabilistic forecasts including its own, with the current picture looking like this:


These forecasts (from the Times, FiveThirtyEight, Huffington Post, Predictwise, Princeton Election Consortium and Daily Kos respectively) don't appear to be clustered around the PredictIt thresholds at all.

Still, the evidence is anecdotal at best, and a proper analysis would have to look for a discontinuity in prices around the time that the map was created, with a clustering of prices around boundary points that could not be accounted for by random chance alone. 

Meanwhile, some caution is probably warranted in interpreting prediction market data. This is a case in which the ease of visualization, aggregation and dissemination of data can have an impact on the underlying measurements themselves, and indeed on the objective probabilities that the measures are intended to reflect.