I recently read Paul Ormerod’s Why Most Things Fail. Focusing on the frequent failure of companies and government policies, Ormerod argues that the environment in which these entities operate is so complex and unpredictable that even the best-laid plans cannot anticipate what will really happen. Between this complexity and the competition among many players laying their own plans, the ones that succeed often get there by luck. And once successful, the only viable strategy for long-term survival is constant adaptation via trial and error.
For a review of the book, I’ll simply point to this Financial Times review, whose mixed bag of praise and criticism I agree with. However, I will highlight one of the book’s better examples.
Economist Thomas Schelling created a model of how racial segregation happens. He posited a large grid, like a chessboard but much larger. Each square holds a red person’s house, a green person’s house, or nothing. A person will move if more than a certain number of his immediately surrounding neighbors are a different color; that number is fixed across all people, representing a societal level of tolerance for different-race neighbors.
It’s a complex system because a single person’s move changes the situation of both the former and the new neighbors, whose own moves can cascade further still. Thus, the results are not obvious from the initial conditions.
Northwestern University has a Schelling-inspired segregation simulator, where I generated this before-and-after combination.
Those familiar with cellular automata (CA) may have already expected that a few simple rules would give rise to order from randomness, but the nature of the order is surprising. For my run, I assumed that each person wanted at least 40% of neighbors to be the same race, yet the system ended up 85% segregated. Because of the system’s interconnections, relatively weak individual preferences interact to produce a strongly segregated society, a result typical of Schelling segregation models.
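If you’d rather tinker than click, here is a minimal Python sketch along the same lines. It is not the Northwestern simulator: the grid size, the 10% vacancy rate, and the segregation measure (average fraction of same-color neighbors) are my own assumptions, with only the 40% threshold taken from above.

```python
# Toy Schelling-style segregation model: an agent moves to a random vacant cell
# whenever fewer than THRESHOLD of its occupied neighbors share its color.
import random

SIZE = 50          # grid is SIZE x SIZE
EMPTY_FRAC = 0.10  # assumed fraction of cells left vacant
THRESHOLD = 0.40   # minimum tolerated fraction of same-color neighbors

def make_grid():
    """Randomly fill the grid with 'red', 'green', or None (vacant)."""
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        if r < EMPTY_FRAC:
            cells.append(None)
        else:
            cells.append('red' if r < (1 + EMPTY_FRAC) / 2 else 'green')
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, x, y):
    """Colors of the up-to-eight occupied cells surrounding (x, y)."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if (dx or dy) and 0 <= nx < SIZE and 0 <= ny < SIZE and grid[ny][nx]:
                yield grid[ny][nx]

def unhappy(grid, x, y):
    """True if too few of this agent's neighbors share its color."""
    ns = list(neighbors(grid, x, y))
    return bool(ns) and sum(n == grid[y][x] for n in ns) / len(ns) < THRESHOLD

def step(grid):
    """Move every unhappy agent to a random vacant cell; return how many moved."""
    movers = [(x, y) for y in range(SIZE) for x in range(SIZE)
              if grid[y][x] and unhappy(grid, x, y)]
    vacant = [(x, y) for y in range(SIZE) for x in range(SIZE) if grid[y][x] is None]
    random.shuffle(movers)
    for x, y in movers:
        vx, vy = vacant.pop(random.randrange(len(vacant)))
        grid[vy][vx], grid[y][x] = grid[y][x], None
        vacant.append((x, y))  # the old cell becomes vacant
    return len(movers)

def segregation(grid):
    """Average fraction of an agent's neighbors that share its color."""
    fracs = []
    for y in range(SIZE):
        for x in range(SIZE):
            ns = list(neighbors(grid, x, y))
            if grid[y][x] and ns:
                fracs.append(sum(n == grid[y][x] for n in ns) / len(ns))
    return sum(fracs) / len(fracs)

if __name__ == '__main__':
    grid = make_grid()
    print(f"initial segregation: {segregation(grid):.0%}")
    for _ in range(100):        # iterate until no one wants to move
        if step(grid) == 0:
            break
    print(f"final segregation:   {segregation(grid):.0%}")
```

The exact final figure varies from run to run and with the parameters above, but it should settle well above the 40% individual preference, which is the point of the exercise.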
This type of result—where a societal outcome emerges nonlinearly from complex interactions—is what Ormerod sees everywhere, albeit in yet more complex form than this model.
And finally, if you’re thinking that the Schelling model has its own kind of predictability, it does at an aggregate level. Ormerod does not give this point enough weight with regard to Schelling’s work, but elsewhere in the book he describes how company failures are predictable in the aggregate by a power law distribution similar to that of biological species extinctions. However, these high-level patterns won’t tell us when particular companies will fail or, in a real-world city, which people will move exactly where.