Sunday, December 17, 2006

Private Prediction Markets for Companies

I’ve been a fan of prediction markets since I saw Idea Futures (now Foresight Exchange) on the Web in something like 1995. Although Idea Futures and other early Web prediction markets were public, the use of private prediction markets within companies has been gaining momentum.

To understand what a prediction market is and why it can be valuable to a company, look at this:

When Todd Proebsting, director of Microsoft’s Center for Software Excellence, tested a prediction market internally, managers quickly gave it their blessing.

The goal: to have 25 members of a development team predict when a Microsoft product would ship (this was an internal product, not one sold externally). The prediction market was set up in August 2004, and the product that “had been in the works for a long time” was scheduled to ship in November 2004. Each “trader” received $50 in their account to start with, and was told that the more accurate their prediction, the more money they would make. The market opened with an initial price of on-time delivery set to 16 2/3 cents.

“The price of ‘before November’ dropped to zero right away,” Proebsting said. “The price of ‘on time’ in about two to three minutes dropped to 2.3 cents on the dollar.” Translated, that’s more than 30-to-1 odds against on-time delivery.
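Translating a contract price into implied odds is simple arithmetic: a contract that costs 2.3 cents and pays $1 implies a 2.3% chance, and the odds against are (1 − p) / p. A minimal sketch (the function name is mine, not from the article):

```python
def implied_odds(price):
    """price: cost in dollars of a contract paying $1 if the event happens."""
    probability = price  # a $0.023 contract implies a 2.3% chance
    against = (1 - probability) / probability  # odds against the event
    return probability, against

# The 'on time' price from the story:
prob, against = implied_odds(0.023)
print(f"Implied probability: {prob:.1%}")          # 2.3%
print(f"Odds against: about {against:.0f} to 1")   # about 42 to 1
```

So 2.3 cents on the dollar is actually worse than 40-to-1 against, comfortably "more than 30-to-1."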

Then the woman responsible for scheduling stepped in, trying to talk her colleagues, who were busy buying and selling delivery dates, out of their pessimism. “She was able to talk (on-time delivery) up to around 3 cents,” Proebsting said. “People really enjoyed moving the price...They loved this.”

“The next day the director came into my office and said, ‘What have you done?’” Proebsting said. But further investigation showed that the product actually was behind schedule, even though nobody had told management, and it eventually shipped in February.

Enough said.

(The excerpt is from Declan McCullagh’s coverage of a recent “micro-conference” on the subject, hosted by Yahoo.)

Sunday, December 10, 2006

Tragedy and the Kindness of Strangers

If you are in the United States, you may have heard about James Kim earlier this week. On a family roadtrip, he took a wrong turn onto an isolated mountain road in Oregon.

It was an ordinary mistake. It became tragic when the family’s car got stuck amid heavy snow. His wife, Kati, and their two children were rescued nine days later near the car, which they had used for shelter.

However, on day seven, with supplies and hope dwindling, James had set out on foot for help. Before succumbing to the elements, he covered more than ten miles of snow-covered mountain wilderness, with little food or protection, searching for the searchers.

NPR’s Scott Simon eloquently captured what a lot of people felt:

So much of modern popular culture depicts parents who are goofy, foolish, clueless and slightly pathetic. [Yet] almost every parent is certain they would risk their life for those they love; James Kim actually made that sacrifice.

In the days before Kati and the children were rescued, the search for the Kims generated a groundswell of media attention, first local then national. It was a primal human drama, magnified by the involvement of the children, four-year-old Penelope and seven-month-old Sabine.

They all could have died. Among the reasons Kati, Penelope, and Sabine were rescued was a primal response from far-flung strangers, people with no reason to be involved other than an instinct to help: the phone company engineer who on his own time combed through cell-phone network data to narrow down the area for rescuers to search; the amateur helicopter pilot, unrelated to the official search effort, who spotted Kati from the air and who “went up because he had a hunch, and because a newspaper picture of the girls reminded him of his own grandkids.” (San Francisco Chronicle)

As for the official search effort, the San Jose Mercury News tells us that 95% of search teams are volunteers, people ready to take a middle-of-the-night call to wherever, for whomever. They did so for days on end.

And finally, for those people far from the scene, whose only connection to the story was the story, there were kind words—of support, prayer, and later, condolence. James’ employer, CNET, received thousands of such emails and postings.

In the aftermath, anonymous people left flowers at CNET’s front entrance. A baker from South San Francisco dropped off a batch of pastries he had made, because that’s what he could do.

So while one man fought for his family’s survival, thousands of people reached out to help. Many were friends, colleagues, and relatives of the Kims. Many more were strangers.

Monday, December 4, 2006

Bassmaster and the American Dream

It started with an aside, a few sentences in a New York Times Book Review piece. The subject was bass, as in fish. The reviewer was listing colorful characters in the world of competitive bass fishing, including:

...Takahiro Omori, winner of the 2004 Bassmaster Classic, who came to America from Japan in 1992. (There seems to be a Japanese craze for American bass fishing.) Omori arrives here virtually penniless and without any English, and lives out of a 1965 Chevy Suburban for three years while trying to break into the pro angling circuit. When he finally has some pro success, he buys a house in Lake Fork, Tex., where he installs a swimming pool, not for swimming but for testing lures.

As American Dream stories go, this one had me hook, line, and sinker.

The inevitable Google search revealed Omori’s Angler Profile in ESPN’s Angler/Tournament Database. His logo-clad outfit testified to competitive bass fishing’s oft-quoted goal of becoming “the next NASCAR.” Copious statistics detailed Omori’s performance in the real world, as well as in Bass Fantasy Fishing. How far he had come.

A 2004 feature article in The Dallas Observer recounted Omori’s journey, from nine-year-old pond fisher to 18-year-old Japanese pro to competitive bass fisherman in America. From his start in the United States at age 21, it took twelve years—of initial failure, then slow but methodical progress, and finally Bassmaster Classic victory.

On the Omori work ethic:

He’d finish one tournament, and even if the next was three weeks out, he’d drive there and pre-fish until it started. At night, Omori would find other pre-fishers and ask them to dinner, whereupon he’d talk fishing. Or, if there weren’t other pre-fishers to dine with—because, really, who wants to pre-fish for three weeks?—Omori would head to his van and read a bass magazine. His trailer on Lake Fork became a library of Field & Streams and Bassmaster videos, stacked to the ceiling. Boxes of fishing tackle were everywhere. Eventually, Omori had to clear walking paths so he could get from his bed to the trailer’s door without stepping on a Rick Clunn tape or a stray crank bait.

Most bass guys had families or at least dated, but how did you date when you put 40,000 miles a year on your beat-up van and were home only to pack up for the next trip? Most guys tinkered with their lures to make them fly better or land softer, but how many stayed up half the night making lure modifications for scenarios, for the moment when you’re fishing in Kentucky, near a bank’s edge, in the early morning, and it’s sunny out, and the water’s 5 feet deep? How many did that? How many had more than 110 tackle boxes with lures inside that carried a labeling, a reminder, of said fishing scenarios?

Omori’s singular pursuit of a U.S. bass-fishing career left him estranged from his father, who only saw dishonor in the enterprise. But by 2001, when Omori started to win enough money to buy a house (which, of course, was on a famous bass-fishing lake), father’s attitude was thawing, to the point that father and family came to the United States to see Omori compete.

But two weeks later, father was dead, sending Omori into 16 months of grief and contemplation. He came out the other side a renewed man on a mission, regaining form in 2003 and aiming for the ultimate goal: a win in the Bassmaster Classic, “the Super Bowl of bass fishing.”

In preparation for the 2004 season, Omori installed the swimming pool mentioned in The New York Times Book Review quote. The Dallas Observer piece adds the following color:

Before it was filled, Omori painted a 1-inch-wide line down the center of the pool. As he prepared for the bass season, he’d grab a fishing rod and one tackle box from the walk-in closet filled with tackle boxes (but not clothes), sit in a chair a full cast from the pool, smell the chlorine and try landing his different lures onto the 1-inch strip, making adjustments if the lure didn’t land right, making adjustments if it hit the strip but then drifted away with a current. He’d do that for hours.

Omori went on to win the 2004 Bassmaster Classic. For a dramatic retelling, see the Dallas Observer article, near the end.

It was August 1, 2004, twelve years after Omori arrived in the United States with hardly any money or English vocabulary. Omori called it the greatest day of his life.

It’s the greatest story—fish or otherwise—that I’ve heard in a long time.

Thursday, November 30, 2006

Stock-Price Milestones

Most milestones have an arbitrary quality, relying on the roundness of a number to make 500 seem more meaningful than, say, 493. But stock-price milestones have an extra layer of meaninglessness. If you know why, you can stop reading. However, the huge amount of media coverage for Google’s stock price breaking $500 last week suggests to me that some people might want to read on.

Here’s the problem: A company’s stock price is not comparable to other companies’ stock prices, nor is it necessarily comparable to itself over time. This is because the stock price represents the market value of the company divided by the number of outstanding shares. So changing the number of outstanding shares can change the price without changing the value of the company.

An example: Microsoft’s share price is currently about 6% of Google’s share price, yet Microsoft’s market value (share price x shares outstanding) is roughly twice that of Google’s. The difference is, Microsoft has a lot more shares outstanding.

In fact, since going public in 1986, Microsoft has split its stock nine times. A split is when a company issues multiple new shares per existing share, often 2 for 1 but sometimes with other ratios.

At this point, a single share of Microsoft IPO stock—that is, before any of the splits—would equal 288 shares of current Microsoft stock. As of today’s closing price ($29.36), that original share would now be worth $8,455.70. But something tells me that the press is not readying articles about Microsoft’s breaking the $8,500 barrier.
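The split history compounds multiplicatively. A quick sketch of the arithmetic, assuming the commonly cited mix of seven 2-for-1 splits and two 3-for-2 splits (the post itself says only "nine"):

```python
from functools import reduce

# Microsoft's nine splits: seven 2-for-1 and two 3-for-2.
# (The exact mix of ratios is an assumption; the article just says "nine".)
split_ratios = [2, 2, 1.5, 1.5, 2, 2, 2, 2, 2]

# Shares of current stock per original IPO share:
multiplier = reduce(lambda a, b: a * b, split_ratios)
print(multiplier)          # 288.0
print(multiplier * 29.36)  # value of one IPO share at a $29.36 close
```

The point survives any particular mix of ratios: the multiplier, not the sticker price, carries the information.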

By contrast, Google has never split its stock. Combine that with Google’s fantastic run of financial performance, and you get $500 per share, a number rarely seen with tech companies. That sounds like news until you realize the number is rarely seen because other high fliers have chosen to split their stocks well before reaching $500.

And we haven’t yet mentioned the reverse split, where a company reduces the number of outstanding shares, thus raising the price. Some of the dot-com-era survivors did reverse splits to get their stocks up to respectable-sounding prices. Reverse-split at a high enough ratio, and you’ve got a $500-per-share stock.

You get the idea. An actual milestone measures the distance of a mile. With a stock-price milestone, your mileage may vary.

Sunday, November 19, 2006

In Praise of the Apple Corer

One of my favorite examples of good design is the apple corer. As a simple, efficient solution to a problem, it has few peers in the kitchen.

This apple corer slices an apple into eight sections while separating out the core. Just put it over the top of an apple and push down.

But wait, you say, isn’t a knife an even simpler tool: a single blade and a handle? Indeed a knife is a simpler and more versatile tool. But for sectioning an apple, it’s an inferior solution.

For a knife to achieve similar results to the apple corer, you’ll be making several slices into a wobbly apple. If you’re reasonably fast and accurate, you might get it done in 15 seconds and still have your fingers intact. Meanwhile, I’ll be done in roughly one second, with no risk of slicing flesh instead of apple.

So if you are a regular apple eater, it’s worth using the right tool for the job. When you do, savor the difference.

Saturday, November 11, 2006

Election Night 2006 Graphics

On election night 2006, all the U.S. news organizations were working the same story: Can the Democrats take the House of Representatives and Senate?

Each organization had more or less the same data. But how they showed the data was different.

I took the following screenshots around 8:15pm PDT on November 7th. All were from news organizations’ home pages or otherwise accompanied headline-level information—that is, these graphics were meant to convey the essence of the story; they were not “drill downs” of detailed data.

The Washington Post

The story is about a relatively small numbers of seats potentially changing hands, so the reference to net gains is good. The graphic is conceptually clever, but I don’t know how many people will understand it at a glance. The white space in the middle of each bar represents undecided seats; thus, the pink versus blue bars are racing to be first across the centerline.


CNN

Let’s start with the lower graphic, for the House: I get it, although the scale is odd. The bars are configured as if both sides are racing to 435. Yet as noted at the bottom of the graphic, 218 is the meaningful number. This awkwardness shows the wisdom of the Washington Post’s “race to the centerline” approach, which more clearly reflects that a seat gained for one side is a seat lost for the other.

As for CNN’s upper graphic, for the Senate: Why does each bar have two shades? It’s not only inconsistent with the House graphic but it does not appear to be explained.

Fox News

I had to shrink this one to fit because it ran most of the way across Fox News’ home page.

On first glance, I’d assume that the distribution of red versus blue represents the relative percentage of seats held. However, the amount of red and blue is the same for both House and Senate, even though the numbers are different (Democrats well ahead in the House, Republicans slightly ahead in the Senate).

So apparently the red and blue don’t move with the numbers, and thus do nothing beyond ornamentation.


MSN

Perhaps in response to Fox’s ornamental graphics, MSN went for an old-school table o’ numbers. MSN’s Microsoft heritage is evident in the design, which appears to be inspired by a PowerPoint 97 template.

The weird thing about the table is how it feels like it should add up to 100, but the “6 undecided” are not in there. Also, do I really care about “Seats not at stake?” I’m forced to care, because it’s the only way to understand the rest of the table, which is a problem. Just tell me what’s changing and how it affects the balance of power.


MSNBC

Now this is an interesting attempt. You need to perceive that the gauge’s pointer can go left or right, and that leftward is Democrat, and rightward is Republican. If you get that, it’s a good quick-glance view, assuming your eyes are sharp enough to read the gray-on-white numbers.

Nitpicking: Why does the Senate “decided” column have white in it and the House “decided” column does not? And what about independent candidates? For the Senate, where the balance of power turned on only a few seats, two independents won. Seems like that should be part of the visual story.

New York Times

Using a map as a visual metaphor is often a good idea, but not when you distort the map to the point where its lack of fidelity is a distraction. In addition, six color codes is probably too many.

In the Times’ defense, this graphic was doing double duty as a user interface. You could click a square to get more detail on that district. Thus, each square arguably needed to be a minimum size for clickability. Or, counter-arguably, if the above graphic was the result of each square needing to be a minimum size, then they needed to do something different in the first place.

ABC News

This is my favorite. It’s not about 100 Senate seats; it’s just about the change in balance of power. It tells us the magnitude and direction of change, and it provides the context for how many seats are necessary for the Democrats to take control.

And that’s all it does. Works for me.

Round-Up and Wrap-Up

If you scroll back up through the various graphics, I think you’ll find that, other than ABC’s (and, to a lesser extent, MSNBC’s and the Washington Post’s), they made the story more complex than necessary. Each did one or more of the following:

  • They gave nonessential numbers (for example, MSN’s “Seats not at stake”)
  • The numbers they gave were anchored in total seat counts when the real story was the change in a small number of seats (for example, CNN’s race to 435)
  • They used graphics that confused more than enlightened (for example, Fox’s unchanging red versus blue, the New York Times’ abstract map)

All this goes to say, it’s not easy to create these graphics, especially in the TV news field, where more information on the screen is often mistaken for better information.

Congratulations to those that managed to keep the numbers, as they say in Washington, DC, “on message.”

Saturday, November 4, 2006

One Person, One Vote, Many Voting Systems

In an election, winners and losers are sometimes determined as much by the voting system as the voters. For example, the United States’ Electoral College allows a candidate to win the U.S. presidency without winning the popular vote, as happened in 1824, 1876, 1888, and 2000.

In San Francisco, we have a special voting system, Ranked Choice Voting, for certain local elections. Instead of voting for a single candidate, voters rank their choices.

Given a population of voter preferences, Ranked Choice Voting not only can lead to different results from traditional voting but it can also have different results among the various Ranked Choice Voting implementations.

The implementation of Ranked Choice Voting that San Francisco uses, Instant Runoff Voting (IRV), works like this:

  • You rank multiple candidates for an office, indicating your first choice, second choice, and so on.
  • If no candidate attains a majority of first-choice votes, the candidate with the fewest first-choice votes is eliminated.
  • Those who voted for the eliminated candidate have their second-choice votes added to the remaining candidates’ totals.
  • If that reallocation does not create a majority for one candidate, the process continues until a majority is reached.

The process is called Instant Runoff Voting because it resembles a series of run-offs. Whereas traditional run-offs happen over time, IRV gets all the necessary information up front, allowing all elimination stages to occur immediately.
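The eliminate-and-transfer steps above fit in a few lines of code. This is a toy sketch with hypothetical ballots (not the numbers from Wikipedia's example), and it ignores the tie-breaking rules a real election would specify:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant Runoff Voting over ranked ballots (lists, first choice first).
    Minimal sketch: no tie-breaking, assumes fully ranked ballots."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        tallies = Counter(next(c for c in ballot if c in remaining)
                          for ballot in ballots)
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:  # majority reached
            return leader
        # Otherwise eliminate the candidate with the fewest votes.
        remaining.discard(min(tallies, key=tallies.get))

# Hypothetical electorate: 'C' leads on first choices, but 'A' picks up
# the transfers once 'B' is eliminated.
ballots = ([['C', 'B', 'A']] * 42 + [['A', 'B', 'C']] * 39 +
           [['B', 'A', 'C']] * 19)

plurality = Counter(ballot[0] for ballot in ballots).most_common(1)[0][0]
print(plurality)            # C wins under plurality
print(irv_winner(ballots))  # A wins under IRV
```

Even this toy electorate shows the punchline of the Wikipedia example: the same ballots crown different winners under different systems.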

Wikipedia’s entry on the subject gives an interesting example of how the same voter preferences can have different results depending on the voting system. (I’ve added some definitions, in brackets, to the Wikipedia text.)

Imagine an election in which there are three candidates: Andrew, Brian and Catherine. There are 100 voters and they vote as follows...

(The original includes Wikipedia’s table of ballots: four blocs of 39, 12, 7, and 42 voters, each ranking Andrew, Brian, and Catherine in a different order.)

In a plurality election [where the winner is the candidate with the most first-choice votes], Catherine would be elected.

In a [standard] runoff election, the voters would choose in a second round between Catherine and Andrew.

In [a San Francisco-like Ranked Choice Voting] election Andrew will be elected.

Under Condorcet’s method [each ballot’s rankings are converted into pairwise preferences, such as A beats B but C beats A, which are then tallied across all ballots] or the Borda count [each candidate gets points in proportion to his/her rank on a ballot, such as first-choicers get 5 points and fifth-choicers get 1 point] Brian would win.

Don’t worry about processing the details. Let’s cut to the implications (also quoted from the Wikipedia article):

[Instant Runoff Voting] may be less likely to elect centrist candidates than some other preferential systems, such as Condorcet’s method and the Borda count. For this reason it can be considered a less consensual system than these alternatives. Some IRV supporters consider this a strength, because an off-center candidate, with the enthusiastic support of many voters, may be preferable to a consensus candidate and that this candidate still must be accepted by a majority of voters.

IRV produces different results to Condorcet and the Borda count because it does not consider the lower preferences of all voters, only of those whose higher choices have been eliminated, and because of its system of sequential exclusions. IRV’s process of excluding candidates one at a time can lead to the elimination, early in the count, of a candidate who, if they had remained in the count longer, would have received enough transfers to be elected.

You get the idea: same voter preferences, different results.

And for a final twist, does the scenario with Brian, Andrew, and Catherine work in the real world? It assumes that the voter preferences and the voting systems are independent—or, put another way, that different voting systems would elicit the same preferences.

But in a real-world election, candidates know which voting system will be used, and they target their campaign spending to shape the preferences of specific segments of voters. Depending on the voting system, it could make sense to target different voters, thus leading to potentially different preferences.

The takeaway: A lot of potential complexity lurks behind “one person, one vote.”

Sunday, October 29, 2006

loudQUIETloud: A film about the Pixies

loudQUIETloud is a documentary about the Pixies’ reunion tour of 2004. Why do you care?

Maybe you’ve heard of the Pixies and know that they’re somehow important. The nutshell: Late 1980s band more famous after the fact than at the time. Big influence on other bands. (“I was basically trying to rip off the Pixies” —Kurt Cobain on the genesis of “Smells Like Teen Spirit.”)

Maybe you need an antidote to the big-time rockumentary, with its inevitable dramatic arc of obscurity to fame to excess to fall-out to redemption. No big dramatic arc here, more of a constant dilemma: Bandmates don’t get along well, but they need each other to be the Pixies—for the money but also for the meaning, since everything else they do individually is in the Pixies’ shadow.

Maybe you want to relate to rock stars. Now middle-aged and not looking too rock-star-ish, the Pixies are ordinary people with everyday problems. But for a couple hours a night, they become musical superheroes.

Maybe you like the underdog. Bassist Kim Deal’s mother approves of the reunion because it will give her daughter “something to do besides sewing and making snowflakes, crafty stuff.” Prior to the reunion, drummer Dave Lovering was spending a lot of time on the beach with a metal detector. Band leader Charles Thompson (aka Black Francis, aka Frank Black) listens to motivational tapes about being a better person.

Maybe you’re a sucker for the human moments. Charles and family visit the aquarium. Joey watches his new baby grow up via webcam from hotel rooms around the world. Kim reads a fan’s gift: a novel that includes a girl whose hero is Kim Deal.

Maybe you like the tunes. Don’t know them?

loudQUIETloud has been (and, in some places, still is) playing in art-house theaters. It will be out on DVD in November.

Sunday, October 22, 2006

The Netflix Prize: Research Project as Product

Several people have asked what I think of the Netflix Prize, a $1 million contest to improve Netflix’s movie recommendations by 10%. For those expecting an “analyze the analytics” posting like Pandora vs., I’m going to throw you a curveball. I think the more interesting story here is about product marketing—and the Netflix Prize itself is the product.

Productizing a Research Project

From Netflix’s perspective, better recommendations mean higher profits. For those interested in the economics, Chris Anderson (author of The Long Tail) explains them.

But how do you make better recommendations? The usual approach would be to put some researchers on an internal project. Netflix had been doing that for years, but their researchers apparently hit the point of diminishing returns.

Then somebody had the idea of throwing open the problem to the rest of the world, saying something like, “There must be thousands of people with the skills, motivation, and computing hardware to tackle this problem. We just need them to work for us.”

There are indeed many experts in fields like statistical computing, machine learning, and artificial intelligence. There are even more dabblers who know just enough to be dangerous and could come up with answers the pros would never consider. The more people involved, the better the chance of success.

So from Netflix’s perspective, the problem evolved from creating a better algorithm to creating something, the Netflix Prize, that in turn would create a better algorithm for Netflix. In essence, they built the Netflix Prize as a product: The “customers” were the prospective researchers; the challenge was to design and market something that would get these customers to participate.

Getting Attention: Eyes on the Prize

The $1 million prize is the most obvious feature. Having noticed the success (and now proliferation) of science-based prizes like the Ansari X Prize, Netflix no doubt liked the combination of free publicity such a prize generates along with the competitive dynamic that real money brings. The press and blogosphere were duly abuzz.

Making It Real: Heavy-Duty Data

Netflix offered up a huge, real-world data set of people’s movie ratings. This alone would have been enough to get lots of smart people playing with the data. Most aspiring data miners—who don’t happen to work at Netflix or other data-rich players—rarely if ever get a crack at data like this.

That said, Netflix slightly tainted this feature by “perturbing” an unspecified amount of the data “to prevent certain inferences being drawn about the Netflix customer base.” It’s not a big issue because a built-in limit exists to Netflix’s messing with the data: If the perturbed data ends up differing from the original data in important ways, Netflix could end up with a nightmare scenario where the winning algorithm exploits those differences and thus is not applicable to the original data. If that happened, Netflix would pay $1 million for an algorithm they can’t use on their actual data. As a result, we can safely assume the perturbed data is faithful to the original.

Talking Right: The Web Site

The Netflix Prize has its own Web site with a voice that is well tuned to its “customers,” the researcher types. The Rules and FAQ pages are not written in legalese, academic jargon, or various marketing dialects that no one speaks but that nevertheless appear in written form everywhere. The text is smart but informal, technical where necessary but not gratuitously so. To whoever wrote it, I salute you.

The Web site also includes a simple but effective leaderboard and community forum.

Giving Back: Winner Tells the World

Anticipating that most prospective researchers would immediately look for a catch—like what happens to the intellectual property you submit—Netflix summarizes the relevant terms in plain English: “You must share your method with (and non-exclusively license it to) Netflix, and you must describe to the world how you did it and why it works.” I expected something far more dire. Besides adding a touch of idealism to the proceedings, the bit about telling the world talks to the likeliest suspects for contestants: academics or corporate researchers who have strong professional incentives to publish their work.

Selling the Goal: It’s Only 10%

“10% improvement” is a clever packaging of the goal, because it’s a lot harder than it sounds. According to the FAQ, Netflix’s own algorithm—the one you’re trying to beat by 10%—is only 10% better than “if you just predicted the average rating for each movie.” In other words, a naive approach works pretty well. And while there is still a significant amount of distance between Netflix’s algorithm and perfection, anything close to perfection is impossible because people are not consistent raters, neither among each other nor individually over time. Thus, a major unknown is how much headroom exists to do better before one hits the wall of rating noise. Yet it is known that achieving the first 10% over a naive approach was far from trivial.

The Results So Far

Three weeks into the competition, more than 10,000 contestants have registered. Twelve contestants have cleared the 1% improvement mark, seven have cleared 2%, three have cleared 3%, and two have cleared 4%. The current leader is at 4.67% improvement, almost halfway to the $1 million prize.

Given that Netflix was ready to let the contest run for ten years, and included yearly “Progress Prizes” for contestants that could exceed the best score by 1%, I’d say the Netflix Prize has exceeded expectations so far. And that does not factor in the positive public relations and consumer awareness that came with the various press hits.

If the progress continues at the current rate, the contest will be over at the three-month minimum that Netflix has set. However, extrapolating from the current pace is risky. Every additional point of improvement will be harder, and we don’t know where the practical limit is.

Why It’s Different

There have been various other data-mining competitions. I’ll hazard a guess that Netflix’s is the first to be covered as a feature story in The New York Times and will easily be the largest ever in terms of participation. (The New York Times story is already behind the pay wall, but syndicated versions are available elsewhere.)

The comparison with previous competitions is not fair, because other competitions were academic affairs, providing a little collegial competition at conferences. Yet Netflix’s success underlines how much more can be done when a data-mining competition becomes a means to do business.

By treating the Netflix Prize as a product, complete with features designed to maximize “customer” buy-in, Netflix created something far better than spending $1 million on its own researchers’ salaries over time. In that sense, the Netflix Prize is more interesting as a business method—spearheaded by spot-on product marketing—than a “Which algorithm will win?” story.

So I say to Netflix: Great idea, great execution. And to the contestants: May the best algorithm win.

Sunday, October 8, 2006

Organic, Inc. by Samuel Fromartz

The U.S. organic food movement started as counterculture but is now accelerating toward the mainstream. Samuel Fromartz’s Organic, Inc. tells the story of how and why.

A business reporter with a soft spot for healthy food, Fromartz pays due respect to both the organic purists, who lament that their movement is being sold out to big business, and the organic popularizers like Whole Foods and Earthbound Farms, which have made megabucks spreading the organic gospel far and wide. Along the way, government agencies, agribusiness, and various other players make appearances.

Of the book’s themes, the one I found most interesting was the divergence between healthy food and organic food. In the early years of the organic movement these concepts were nearly synonymous. The goal was food that’s healthy for you and healthy for the planet; organic farming was a key means to the end. However, at the time, whether that healthy food tasted good was a secondary consideration, leading to the societal stereotype of “health food” as bland, killjoy food. But today, people increasingly believe they can have their organic cake and eat it too:

[O]rganic food persisted and grew precisely because the movement defined organic as a production method rather than a prescriptive diet such as Atkins, South Beach, the Zone, or Weight Watchers. The benefit came from eating the food, not from avoiding foods or counting calories. In this way, organic food became associated with a “healthy lifestyle,” which meant you ultimately decided what made you feel good. Whole Foods’s organic chocolate truffles epitomize this for me; they taste good because they contain chocolate, sugar, and saturated fat—not the healthiest mix. Yet by making them organically, Whole Foods tempered the “bad” quotient and transformed them into something “good.”

For the purists, organic chocolate truffles are on the slippery slope that leads to the organic Twinkie, a totemic symbol of the final organic betrayal. Yet for the popularizers, an organic Twinkie is still better, for you and the planet, than a traditional Twinkie.

Adding a twist to this debate, Fromartz notes:

[Organic popularizers] argued that making an organic Twinkie would “Grow the market! Convert more land!” The purists said, “No! Organic food should be kept pure and the Twinkie banned!” What neither side imagined was that consumers might buy conventional Twinkies and wash them down with organic milk, or that such mixed consumption might be preferable.

Per that last quote, Fromartz covers various consumer research that says organic currently is nowhere near an all-or-nothing choice even for price-insensitive people who could buy organic alternatives for most of their food products. Today, people are paying the premium for organic foods selectively, in areas where the benefit is perceived to be most important. For example, organic is particularly strong in baby food, even for lower-income purchasers.

Now, with Wal-Mart looking to drive down organic prices, the further mainstreaming of organic food is inevitable. You may not know it, but healthy-brand icons Odwalla, Boca Burgers, and Kashi are already owned by Coca-Cola, Kraft, and Kellogg’s, respectively. And, by the way, not all of these healthy brands’ products are organic—a further reminder that the relationship between “organic” and “healthy” is not simple.

It’s a story with many chapters to play out. Organic, Inc. is a good guide to the action so far.

Tuesday, September 26, 2006

Disney Gadget Magnetism

I recently walked the exhibit floor of a consumer-electronics tradeshow for retailers. The exhibitors were mostly manufacturers, showing their latest products in hope of securing holiday-season orders from retailers.

In the absence of any breakthrough new gadgets, my attention turned to stuff that might interest my daughter. At 9 months old, pretty much anything she can chew on is interesting, but projecting forward a bit in her development, I noted this Minnie Mouse USB drive.

Sorry for the blurry picture, but I wanted to show how small it was in relation to my hand. You can fold the plug back into the pink case, making it yet smaller. Here is a clearer view of the device (without Minnie stenciled on the front) from the manufacturer, A-DATA of Taiwan.

They weren’t giving the USB drives away at the booth, but the A-DATA reps did provide me (on behalf of my daughter) what’s pictured below.

You might think these are Disney-logo’d Secure Digital (SD) cards, ranging in capacity from 256 megabytes to 2 gigabytes. You might also think this is one of those Asian-market peculiarities, like the 22,000 unique products adorned by Hello Kitty logos. Think again.

First, these are actually refrigerator magnets of Disney-logo’d SD cards, sized exactly to SD card specifications. Second, although they are not yet available in the United States—as SD cards or magnets, apparently—you can score other Disney-logo’d SD cards (with Disney content) at Wal-Mart today.

And finally, a semi-related thought: Some analysts predict that solid-state media (like that in SD cards) will soon eclipse magnetic media (like that in hard drives). If so, we have now seen how the “magnet” in magnetic media can live on, as the tchotchke version of solid-state media devices, which themselves feature children’s cartoon characters.

Tuesday, September 12, 2006

Alaska’s Width

Until today I was not aware that Alaska is as wide as the lower 48 states, extending from San Francisco to Jacksonville, Florida. A colleague’s office has a map similar to this one, which illustrates the point.

It’s easy to forget the Aleutian Islands’ 1,200 miles of westward reach, as well as the eastward span of the Alaska Panhandle.

[A larger version of the map is here. Thanks to Doug L. for the inspiration.]

Sunday, September 10, 2006

CarMax Does Data Better

The September 2006 issue of Business 2.0 has an article, “The Wal-Mart of Used Cars,” about CarMax, an analytics-driven chain of superstores for used cars.

In the same way that Wal-Mart revolutionized the logistics of retailing, CarMax set out to nail the perfect mix of inventory and pricing through exhaustive analysis of sales data. Its homegrown software helps CarMax determine which models to sell and when consumer demand is shifting. Each car is fitted with an RFID tag to track how long it sits and when a test-drive occurs....

Without the data, stocking CarMax lots would be a logistical nightmare. Each store carries 300 to 500 cars at any given time, and unlike Wal-Mart, the company has no vendors to stock its “shelves.” Instead, CarMax depends on 800 car buyers, who draw on the company’s reams of data to appraise vehicles.

The article doesn’t mention it, but I suspect that CarMax’s situation is one where the analytics appear to be the competitive advantage yet the real advantage is the data feeding the analytics. That is, analyzing sales and inventory data a la CarMax involves a mature set of techniques and tools; it’s highly unlikely that CarMax has found a new analytics secret sauce. Far likelier is that CarMax collects more and better data than the competition, allowing those mature analytical techniques to yield better results.

For example, consider two big advantages CarMax has in data collection:

  • It’s a network of superstores, each of which carries many more cars than a typical dealership. This scale means CarMax can sample the marketplace better than other used-car dealers.
  • CarMax’s car buyers act as a data-normalizing force, ensuring that the details of cars in CarMax’s database are classified in a complete and consistent way. This advantage is key compared to the obvious alternative of scraping eBay and other online sources of used cars, which together would yield an even larger sample than CarMax’s. The problem is, the greater quantity of data comes at the cost of much lower quality. There would be no common definition of key attributes like “good condition” or, for that matter, no standards for which attributes to include. That means noisy, messy data—just the thing to make otherwise good analytics look bad.
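
To make the data-quality point concrete, here is a toy sketch (the listings, labels, and prices are entirely hypothetical, not CarMax or eBay data): grouping scraped listings by their raw, free-text condition strings fragments the sample into useless slivers, while a normalizing pass makes an average price meaningful.

```python
# Illustrative sketch: why normalized attributes matter for analytics.
# Scraped listings describe condition inconsistently, so grouping by the
# raw strings fragments the sample.
from collections import defaultdict

scraped = [
    ("2003 Accord", "Good", 9500),
    ("2003 Accord", "good cond.", 9200),
    ("2003 Accord", "GOOD CONDITION!!", 9800),
    ("2003 Accord", "gd", 9100),
]

def normalize(label):
    """Map free-text condition descriptions onto a controlled vocabulary."""
    label = label.lower()
    if "good" in label or label == "gd":
        return "good"
    return "unknown"

raw_groups = defaultdict(list)
clean_groups = defaultdict(list)
for model, cond, price in scraped:
    raw_groups[cond].append(price)
    clean_groups[normalize(cond)].append(price)

print(len(raw_groups))    # 4 tiny groups -> no usable average
print(len(clean_groups))  # 1 group of 4 -> a meaningful average
avg = sum(clean_groups["good"]) / len(clean_groups["good"])
print(avg)                # 9400.0
```

CarMax’s buyers do this normalization at the point of data entry, which is far more reliable than trying to clean up free text after the fact.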

So let CarMax be a reminder: Amid all the attention Internet-based businesses get for their unprecedented data opportunities, traditional businesses like used-car lots can be networked and data-intensified to compete in new ways as well.

Sunday, September 3, 2006

Follow Up: Harvesting Power from Human Motion at Large Scale

Last year, I speculated about whether it would be possible to harvest power from human motion on a large scale. “On a large scale” was the key part, since devices already exist to harvest power on a small scale, such as combat boots that generate a small amount of power while the wearer walks. To provide a contrasting example, I asked whether one could harvest the vibrational motion of a highway overpass as vehicles passed over.

Lately, architect Claire Price has been in the news with plans to try something along these lines. Here are a few excerpts from a recent BBC article by her:

Reading this, your body at rest is emitting about 100 watts into the environment. If you’re sitting in an open plan office, count the number of surrounding colleagues and you don’t need to be a maths genius to appreciate the possibilities of tapping into all that wasted energy....

“[H]eel-strike” generators, powered through the pumping motion of a footstep, can be embedded within a boot heel. These devices currently achieve upwards of 3 - 6 watts of power output. So the 34,000 commuters who pass through Victoria underground station at rush hour, for example, could theoretically generate enough energy to power 6,500 LED light fittings - energy that today is disappearing into the ground....

Elsewhere in the world, researchers are also looking into how energy harvesting devices can be embedded within roads or how they can be used to create a self-powering heart pacemaker or even an artificial limb....

We [Price’s UK-based firm, The Facility Architects] are applying and testing our ideas practically within a building project within the next year, including a sprung floor fitted with heel-strike generators to harvest the energy from people walking across it. This power output will then be wired back to provide the lighting within that building.

We also plan an LED light fitting with its own micro generator. This unit will convert vibrations from passing trains, lorries or planes to provide continuous light without the need for wiring into the grid.

As of last year, I was unable to find anything on the large-scale version of harvesting power from human motion. So I’m glad that whatever work was/is being done is now in the spotlight.
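
As a back-of-envelope check on the article’s Victoria-station claim, here is the implied arithmetic. The per-commuter wattage is my own assumption, taken from the quoted 3-6 watt range; the article gives only the totals.

```python
# Back-of-envelope check of the BBC article's numbers: 34,000 rush-hour
# commuters powering 6,500 LED light fittings.
commuters = 34_000
watts_per_commuter = 5      # assumed: within the quoted 3-6 W range
led_fittings = 6_500

total_watts = commuters * watts_per_commuter
watts_per_fitting = total_watts / led_fittings
print(total_watts)                   # 170000
print(round(watts_per_fitting, 1))   # ~26 W implied per fitting
```

So the claim implies roughly 26 watts available per fitting, assuming every commuter’s footsteps could actually be captured, which is of course the hard part.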

I hope it succeeds.

Monday, August 28, 2006

Piracy in Peru

The BBC has an interesting article about piracy (of the intellectual-property type) in Peru, where “the legal music market has collapsed, unable to compete with 98% of all music being sold on the black market.”

A few more excerpts:

More than half of Peru’s economy is made up of unregulated businesses that do not pay tax. More than half the 28 million population lives below the poverty line and simply cannot afford the genuine goods....

150 police officers armed with tear gas and riot control equipment who raided one well-known pirate market in Lima were simply fought off by the well-organised black marketeers.

And finally:

There is a story circulating in Peru, which could well be true, that another Peruvian writer, the popular Jaime Bayly, was waiting at traffic lights when black marketeers offered him a pirate copy of one of his own books.

Recognising the author from the photo on the back cover, the vendor, without even pausing to blush, offered him a discount.

Read the full article here.

Sunday, August 20, 2006

David Foster Wallace Serves It Up

This morning’s New York Times had a welcome surprise with David Foster Wallace’s “Federer as Religious Experience.” Wallace is one of the most innovative writers around, and if you like his style, this piece won’t disappoint.

About Wallace’s style—well, let’s take an extended example:

Tennis is often called a “game of inches,” but the cliché is mostly referring to where a shot lands. In terms of a player’s hitting an incoming ball, tennis is actually more a game of micrometers: vanishingly tiny changes around the moment of impact will have large effects on how and where the ball travels. The same principle explains why even the smallest imprecision in aiming a rifle will still cause a miss if the target’s far enough away.

By way of illustration, let’s slow things way down. Imagine that you, a tennis player, are standing just behind your deuce corner’s baseline. A ball is served to your forehand — you pivot (or rotate) so that your side is to the ball’s incoming path and start to take your racket back for the forehand return. Keep visualizing up to where you’re about halfway into the stroke’s forward motion; the incoming ball is now just off your front hip, maybe six inches from point of impact. Consider some of the variables involved here. On the vertical plane, angling your racket face just a couple degrees forward or back will create topspin or slice, respectively; keeping it perpendicular will produce a flat, spinless drive. Horizontally, adjusting the racket face ever so slightly to the left or right, and hitting the ball maybe a millisecond early or late, will result in a cross-court versus down-the-line return. Further slight changes in the curves of your groundstroke’s motion and follow-through will help determine how high your return passes over the net, which, together with the speed at which you’re swinging (along with certain characteristics of the spin you impart), will affect how deep or shallow in the opponent’s court your return lands, how high it bounces, etc. These are just the broadest distinctions, of course — like, there’s heavy topspin vs. light topspin, or sharply cross-court vs. only slightly cross-court, etc. There are also the issues of how close you’re allowing the ball to get to your body, what grip you’re using, the extent to which your knees are bent and/or weight’s moving forward, and whether you’re able simultaneously to watch the ball and to see what your opponent’s doing after he serves. These all matter, too. 
Plus there’s the fact that you’re not putting a static object into motion here but rather reversing the flight and (to a varying extent) spin of a projectile coming toward you — coming, in the case of pro tennis, at speeds that make conscious thought impossible. Mario Ancic’s first serve, for instance, often comes in around 130 m.p.h. Since it’s 78 feet from Ancic’s baseline to yours, that means it takes 0.41 seconds for his serve to reach you.9 This is less than the time it takes to blink quickly, twice.

(If that last paragraph’s density attracted you, look for the 258-word sentence in the piece’s second paragraph.)

In our excerpt above, we have several interesting features:

  1. The first paragraph is a nice conceptual turn. Wallace renders quaint the “game of inches” cliché by explaining the micrometric stakes of each racket impact. Wallace then consolidates the concept with the rifle analogy, which makes it all seem obvious.
  2. “Imagine that you, a tennis player, are standing just behind your deuce corner’s baseline.” You probably don’t know what your “deuce corner’s baseline” is, but the meaning doesn’t matter. Unlike the usual use of jargon, which is like a locked door to outsiders, Wallace’s use of jargon here is more like wallpaper. It contributes to the atmospherics of a room you’re already in.
  3. He is addressing you directly—yeah, “you.” It juxtaposes well with the paragraph’s technicalishness. (No, that’s not an official word, but somehow it’s right for this occasion.)
  4. As for the long middle of our excerpt’s second paragraph, it’s a big set-up. He enumerates the myriad factors that go into returning a pro serve only to deliver the punchline that you’ve only got 0.41 seconds to do the right thing—“the time it takes to blink quickly, twice.”
  5. Finally, at the end of the excerpt’s second-to-last sentence, is a marker for footnote 9. That footnote is a preemptive strike against those who might question whether Wallace’s calculation of 0.41 seconds suffers from omitting the additional distance the ball travels from the bounce. This footnote has its own footnote. Such footsie with footnotes is a Wallace trademark. If only one could trademark a trademark.
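
Footnote 9’s headline figure is easy to verify. A quick computation (mine, not Wallace’s) reproduces it, under the idealized assumption that the serve travels at a constant 130 m.p.h.:

```python
# Sanity check of the 0.41-second figure: a 130 m.p.h. serve covering
# the 78 feet between baselines, at constant speed.
mph_to_fps = 5280 / 3600         # feet per second per mile per hour
speed_fps = 130 * mph_to_fps     # about 190.7 ft/s
travel_time = 78 / speed_fps
print(round(travel_time, 2))     # 0.41
```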

On a larger level, Wallace makes some structural gambits. For example, a child who contracted cancer at age two appears first as a bit of innocuous reportage, then later as a jarring counterpoint that interrupts the story, and finally, in the last paragraph not of the main piece but of the footnotes, as an explicit connection to the main theme.

A breezy read this piece is not, yet Wallace’s technical skill brings a conversational tone to the most entertainingly arcane points. Call it obsessive-casual.

So set aside 10 minutes and read the piece. Even if you can’t get into Wallace’s style, you’ll find enough little gems along the way to make it worthwhile—for example, the description of Wimbledon line judges “in their new Ralph Lauren uniforms that look so much like children’s navalwear.”

Check it out: Federer as Religious Experience

Sunday, August 13, 2006

Wine Ratings: Drunk on Numbers?

In “Wine Ratings Might Not Pass the Sobriety Test,” Gary Rivlin of The New York Times examines the 100-point rating systems that have become pervasive in the wine business. Some highlights:

A rating system that draws a distinction between a cabernet scoring 90 and one receiving an 89 implies a precision of the senses that even many wine critics agree that human beings do not possess. Ratings are quick judgments that a single individual renders early in the life of a bottle of wine that, once expressed numerically, magically transform the nebulous and subjective into the authoritative and objective.

When pressed, critics allow that numerical ratings mean little if they are unaccompanied by corresponding tasting notes (“hints of blackberry,” “a good nose”). Yet in the hands of the marketers who have transformed wine into a multibillion-dollar industry, The Number is often all that counts. It is one of the wheels that keep the glamorous, lucrative machinery of the wine business turning, but it has become so overused and ubiquitous that it may well be meaningless — other than as an index of how a once mystical, high-end product for the elite has become embroidered with the same marketing high jinks as other products peddled to the masses.

Although four- or five-star rating systems for wine existed before, Robert Parker originated the modern 100-point system in 1978. Since then, it has inspired many imitators, to the point where a single wine may be rated by a dozen different 100-point systems.

Cork dorks say that even today, the only scores that count are those of the first two publications to embrace the 100-point score: Mr. Parker’s Wine Advocate and Mr. Shanken’s Wine Spectator. That has not stopped retailers from cherry-picking high scores no matter who comes up with them. One retailer uses no fewer than seven sources when fishing for members of the 90+ club, including The Wine News, the Connoisseurs Guide and the International Wine Cellar. And in a pinch, it is not above turning to an eighth source.

When promoting Capcanes 2001 Costers del Gravet, a Spanish wine, for instance, the retailer quoted a well-regarded publication, International Wine Cellar, written by Stephen Tanzer, in its review. But the source of the 91 that earned the 2001 Costers a place on its 90+ list was the retailer itself. (The company did not return a call seeking comment.)

Not only are these systems open to overt manipulation, but even the most respected and systematic raters communicate their biases, if inadvertently:

Mr. Parker and the critics from Wine Spectator tend to save their highest ratings for robust-tasting, more intense wines....“That’s another way numbers are misguiding people,” said Mr. Tisherman, the former Wine Enthusiast editor who now calls himself a “recovering critic” and helps clients sponsor wine-tasting parties. “A 96 is better than an 86, but not if you want a light-bodied wine, and Americans tend to prefer light-bodied wines. Yet those are also the wines least likely to get a good score.”

Although I’ve provided several tastings from the article, I’d recommend you quaff the whole thing. It has precision, balance, concentration, power and finesse, with plush layers of currant, mocha, berry, mineral and spice—oh wait, that last part is not about the article; it’s from the description of Wine Spectator’s 2005 wine of the year, Joseph Phelps Insignia Napa Valley 2002.

Did I mention it scored a 96?

[Update, 11/17/2009: The Wall Street Journal covers the results of controlled experiments to determine the (in)consistency of wine judging. One analysis of the same wines’ results across multiple wine competitions showed near-random outcomes.]

Monday, August 7, 2006

Vanity Sizing

As part of my day job, I receive various news about the retailing industry—from which, I bring you the following abuse of numbers, apparently particular to women’s clothing.

ABC News’ Good Morning America recently reported about “vanity sizes” in women’s clothing:

[C]onsidering pop culture’s obsession with thinness, for many women no size is too small.

“I had, one time, a client who said, ‘I get into a 10 now,’ ” said Bridgette Raes, a fashion consultant. “She was originally a size 14. When she could get into a 10, and then into an 8, she was like, ‘I know that it was a lie, I know that this really isn’t a 10, but I love the fact that the label says 10.’”

That may be the thinking behind vanity sizing — which means clothes are cut bigger, but sized smaller.

“Manufacturers and brands are trying to really make women feel good about buying their brand,” said Marshall Cohen, a retail industry analyst. “If you were worried about being a size 14 or 16, I can make you feel great by a size 10 or 12.”

One size 0 could have a waistline of 28 inches, which is, according to the American Society for Testing and Materials, a size 10.

It’s not a new topic. This article, from The Arizona Republic in 2004, indicates that vanity sizing has been around a long time, and when efforts periodically emerged to (re)standardize women’s sizing, the apparel manufacturers ignored them. By contrast, men’s clothing sizes have largely stayed the same over time.

I suspect most women understand vanity sizing, and per the article, many appreciate it. So among the sins of misusing numbers, stretching the standard-sizing truth is like a white lie that everyone’s in on. After all, if the scale doesn’t lie, clothes can at least fib.

Sunday, July 30, 2006

Sign Usability: San Francisco Does the Right Thing

Last month, I noticed this road-sign usability issue (4th Street, between Folsom and Harrison in San Francisco):

The sign, which was about three feet behind the pole, points the way toward the local baseball park, AT&T Park, the name of which has changed an average of once every two years since its opening in 2000.

A week later, I saw that the City of San Francisco had moved decisively to improve the sign situation...

...not only making the sign visible but also following the wisdom of the previous sign, which restricted itself to the generic term “Ballpark.”

Old School Mash-Ups

It took my entire life, up until last week, to realize that “The Alphabet Song” (the one that goes, “A, B, C, D, E, F, G...”) and “Twinkle Twinkle Little Star” have the same tune. Since then, I have asked several people whether they ever noticed this. No one had.

A few details and links: “The Alphabet Song,” originally from the 1830s, uses the same tune as “Twinkle Twinkle Little Star,” which in turn is a combination of the 1806 poem, “The Star,” by Jane Taylor and the 1761 French melody “Ah! vous dirai-je, Maman.”

If you want to see the various other ways the tune has been repurposed, including by Mozart in 1778, follow the links above.

Monday, July 24, 2006

Why You Will Probably Outlive the Average Life Expectancy

The average life expectancy in the United States is roughly 75 years for males, 80 years for females. Chances are, you will exceed the number that applies to you.

Because life-expectancy numbers are often based on recent mortality rates, you might be thinking that future advances in medicine will give you an edge. While that may be true, the surprise is that you already have an advantage over the original numbers just by being alive to read this.

Think of 100,000 people born the same year as you. A certain percentage of that original population will die each year, as represented by the distribution below. (The original numbers are from the U.S. Social Security Administration, from which I derived the measures and charts on this page.)

It’s not a happy thing, but each bar in the chart indicates the percentage of the original 100,000 people that died, or are projected to die, in each year. You don’t know which future bar has your name on it, but you do know that all the bars to the left of your age no longer apply to you. As a result, your current life expectancy is computed against the average of the remaining population.

In turn, that means your life expectancy is always increasing and that you have exceeded the original average practically from the beginning, as illustrated below (based again on the same Social Security data).
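
To make that computation concrete, here is a minimal sketch using a made-up toy mortality table (not the actual Social Security numbers): once you drop everyone who died before your current age, the average age at death of the remaining population rises.

```python
# A minimal sketch of conditional life expectancy, using hypothetical
# numbers. deaths[x] is how many of an original 100,000-person cohort
# die at age x.

def expected_age_at_death(deaths, current_age):
    """Average age at death over cohort members still alive at current_age."""
    remaining = {age: n for age, n in deaths.items() if age >= current_age}
    alive = sum(remaining.values())
    return sum(age * n for age, n in remaining.items()) / alive

# Toy distribution: some early deaths, most concentrated late in life.
toy_deaths = {0: 1000, 20: 2000, 40: 5000, 60: 17000, 80: 55000, 95: 20000}

print(round(expected_age_at_death(toy_deaths, 0), 1))   # 75.6 at birth
print(round(expected_age_at_death(toy_deaths, 41), 1))  # 79.6 at age 41
```

Even in this crude table, surviving to 41 adds about four years to your expectancy, because the bars to your left no longer apply to you.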

Note that if you are a 40-year-old male, you’re already up more than two years from the original average. If you are a 40-year-old female, you’re up about a year and a half. And males who live into their mid-80s will have closed most of the life-expectancy gap versus females.

Of course, all this is based only on the male and female averages. Your life expectancy will rise or fall based on other important attributes. For example, if you are a chain-smoking alcoholic who lives on a Superfund site, you might want to lower your expectations.

Nevertheless, everything else being equal, this is a subject where it’s nice to know the odds are with you.

Sunday, July 16, 2006

George Harrison, Pirate?

While on vacation, I ran across A History of Pirates. I didn’t read it, but the cover caught my eye: Is that an All Things Must Pass era George Harrison as a pirate? You be the judge.

Small Town Volunteerism

I live in a dense urban setting. But once or twice a year, we decamp to a small town amid the corn fields of Illinois, where my wife is from.

It’s a place where you run into people like the guy who is a member of his town’s volunteer fire department. With a population of 2,500, his town is even smaller than the one we visit.

A volunteer ambulance driver, he was up at 2:30am the night before, responding to a random emergency. He was joined by two other paramedics. They all live close to the fire station, so they can get there within three minutes of their pagers’ ringing.

This type of volunteerism combines generosity with self-reliance in a way that’s natural to small communities. And while I’m not suggesting that everyone in these small towns is ready to charge out into the night for someone in need, I admire those who do.

Sunday, June 25, 2006

By the Numbers: Komar and Melamid’s “People’s Choice”

Last week’s posting, Data Visualization as Art, reminded me of another topic where art and numbers intersect. In this case, it’s Komar and Melamid’s “People’s Choice” project. I say “project” because People’s Choice comprises many different paintings, each of which is the result of market research into the “most wanted” and “least wanted” paintings of various countries. The data from the market research is part of the displayed art too.

This 1999 New York Times review of the project’s accompanying book, Painting by Numbers: Komar & Melamid’s Scientific Guide to Art, explains the idea:

Noting the gulf that yawned between a democratic society and its self-consciously elitist art world, Komar and Melamid decided to find out for themselves what people who were not a part of that world liked to see in pictures. Accordingly, they availed themselves of that scorned but ubiquitous resource, the opinion poll. Beginning late in 1993, telephone researchers hired by them questioned 1,001 Americans of all demographic shadings, asking them about their preferences as to color, dimensions, settings, figures — 102 questions in all. Sixty-seven percent of respondents liked a painting that was large, but not too large — about the size of a dishwasher (options ranged from “paperback book” to “full wall”). A whopping 88 percent favored a landscape, optimally featuring water, a taste echoed by the majority color preferences, blue being No. 1 and green No. 2. Respondents also inclined toward realistic treatment, visible brushstrokes, blended colors, soft curves. They liked the idea of wild animals appearing, as well as people — famous or not — fully clothed and at leisure....Armed with this information, Komar and Melamid started to paint.

Below is Komar and Melamid’s “Most Wanted” painting for the United States, reduced down from its original “dishwasher-size” canvas. It features the attributes just mentioned (yes, that’s George Washington posed in the middle):

The image is from the Dia Art Foundation’s site, which has a Web version of People’s Choice, including the survey results.

Back to the New York Times review:

Komar and Melamid’s project is conceptualism at its most elegant and effective, a little bomb thrown into the works. It puts into question not only the relation between art and ordinary people, and the meaning of “the market,” but also the ambiguity of opinion polls and, by extension, the discordance between the individual and the mass.

Finally, the Dia Foundation’s Director’s Introduction quotes Melamid:

In a way it was a traditional idea, because a faith in numbers is fundamental to people, starting with Plato’s idea of a world which is based on numbers. In ancient Greece, when sculptors wanted to create an ideal human body they measured the most beautiful men and women and then made an average measurement, and that’s how they described the ideal of beauty and how the most beautiful sculpture was created. In a way, this is the same thing; in principle, it’s nothing new. It’s interesting: we believe in numbers, and numbers never lie. Numbers are innocent. It’s absolutely true data. It doesn’t say anything about personalities, but it says something more about ideals, and about how this world functions. That’s really the truth, as much as we can get to the truth. Truth is a number.

You might as well consider that commentary part of the piece too.

In art textbooks of the future, look for People’s Choice to join Warhol’s Campbell’s Soup Can paintings as emblems of (post)modern consumer society.

Sunday, June 18, 2006

Data Visualization as Art

Will there be a future Rembrandt whose medium is data visualization? I was thinking about this after encountering Jesse Bachman’s “Death and Taxes: A visual look at where US tax dollars go.”

According to the summary, Bachman spent close to a year researching and creating this visualization of where the U.S. government spends money. I have reproduced a small version below...

...but I highly recommend you scroll around the big version to appreciate the piece’s detail, clarity, and artistry.

I use the word artistry with the idea that some data visualizations qualify as art. Bachman’s piece clearly has artistic intent, from its political message to the name of the site it’s on (deviantART). And independent of the data’s message, the visual design and rendering is...well, artistic.

By comparison, below is an infographic on a similar subject. It is nicely done but feels more like good craft than art. (See here for the full-page version.)

(Yes, I realize at this point that we are ankle-deep in the “What is art?” swamp. Maybe Bachman’s stuff is really “graphic design”? Or can graphic design be art? And so on. For the rest of this post, I promise to restrict myself to sloshing around the edge of the swamp rather than going deeper.)

Bachman is selling posters of “Death and Taxes,” so you can hang it on your wall, art-like. Similarly, data-visualization titan Edward Tufte’s Web site has a “Fine Art” section where you can order large, high-resolution prints of his work.

And then there’s Mark Lombardi, whose work I saw a few years ago in an art gallery. He researched and created highly detailed graphs showing the connections between people and events. Here’s an example of one of his works, “george w. bush, harken energy, and jackson stephens c.1979-90, 5th version.”

Here is a close-up of one little part:

This piece is “only” 20 x 44 inches. Lombardi’s work got as big as 5 feet by 12 feet, dense with connections. Everything he did was researched and drawn by hand. Despite working in the computer age (up until his death in 2000), he used index cards for the research and pencil/graphite on paper for the pieces. See here for more examples as well as, at the bottom of the page, Lombardi’s commentary.

The schematic-diagram look of Lombardi’s work was an artistic choice, a visual antiseptic that left only facts on the page. Because many of his pieces involved scandals, the connections often intersected the famous (George W. Bush and Bill Clinton each got caught in a Lombardi web) with the infamous, leaving the viewer to decide the significance.

I bring up Lombardi because his work and Bachman’s “Death and Taxes” strike me as opposite ends of the “data visualization as art” spectrum. While both render data clearly and with a message—that is, they are not using data to drive abstract art (a whole other category)—Bachman does so with overt artistic technique whereas Lombardi employs the covert artistry of minimalism.

So if certain data visualizations can be art, we might as well ask whether history will judge a future data-viz artist as a master, on par with a Rembrandt. I think it could happen because, when anointing great artists, art historians often pick artists whose work is representative of their time. This being the information age, data-viz art looks suspiciously representative to me.

[I originally found “Death and Taxes” via a write-up on]

Sunday, June 11, 2006

Unexpected Numbers from the Economist’s 2006 Pocket World in Figures

Next time you come up short for cocktail-party chatter, just remember: “Equatorial Guinea.”

I take that lesson from the Economist magazine’s 2006 Pocket World in Figures, a book that compiles a wide range of numbers about various countries and regions of the world. Following are a few unexpected results that caught my eye.

Which country had the highest economic growth from 1993 to 2003, measured by the average annual percentage increase in real Gross Domestic Product (GDP)?
Everybody knows about China’s big growth, but at 8.9% it’s only enough for 4th place. The winner is Equatorial Guinea at 25.9%, due to its relatively recent exploitation of oil reserves. The other two ahead of China (Bosnia and Liberia) both experienced bounce-back growth after wars.

Which country is the largest donor of bilateral and multilateral aid, as a percentage of GDP?
If you’re expecting a Scandinavian country to be the winner here, you are close. Norway (0.92% of GDP) and Denmark (0.84% of GDP) are numbers two and three. But number one is Saudi Arabia at 1.11% of GDP. What about the United States? Although it is by far the largest donor nation in absolute dollars, it ranks 26th when measured as a percentage of GDP (0.15%).

Which country is most energy efficient, in terms of GDP per unit of energy use?
This one is measured in “purchasing power parity dollars per kilogram of oil equivalent.” I take that to mean economic output per energy input. The winner is Peru, and the rest of the top-10 countries are strange bedfellows: Hong Kong, Bangladesh, Namibia, Morocco, Uruguay, Colombia, Costa Rica, Ireland, and Italy.

Which country has the greatest number of cars per 1,000 people?
Unless you somehow already know the answer, don’t bother guessing. The winner is Lebanon at 732 cars per 1,000 people. The United States is 14th at 481 per 1,000.

That result about car-happy Lebanon begs to have its source checked. However, specific sourcing is absent from the 2006 Pocket World in Figures, an unfortunate omission even if the book leans more toward entertainment than serious reference material.

So there you have it. If, after dishing these facts and figures, you aren’t the life of the party, then you’re not partying with the Council on Foreign Relations.

Sunday, June 4, 2006

Latitude with Attitude

What’s wrong with this picture?

It’s a Dell Latitude notebook computer with an Apple decal over the Dell logo. If you look carefully, you can see the Dell logo showing through.

Because some of Apple’s notebooks are a similar color and have the logo in the same place, this customization is a particularly clever visual hack.

The perpetrator told me he was inspired by his love of iPod/iTunes—an example of a brand that’s loved trumping a brand that’s respected.

Sunday, May 28, 2006

Wal-Mart and Economies of Density

Today’s a-ha moment is brought to us by Thomas J. Holmes, professor of economics at the University of Minnesota. In an interview about his paper “The Diffusion of Wal-Mart and Economies of Density,” he says:

Holmes: Briefly, Wal-Mart has an incentive to keep its stores close to each other so it can economize on shipping. For example, to make this simple, just think about a delivery truck: If Wal-Mart stores are relatively close together, one truck can make numerous shipments; however, if the stores are spread out, you wouldn’t have that benefit. So, I think that the main thing Wal-Mart is getting by having a dense network of stores is to facilitate the logistics of deliveries.

There are other benefits, too. Opening new stores near existing stores makes it easier to transfer experienced managers and other personnel to the new stores. The company routinely emphasizes the importance of instilling in its workers the “Wal-Mart culture.” It would be hard to do this from scratch, opening up a new store 500 miles from any existing stores....

For the sake of this discussion, let’s say that Wal-Mart’s most desirable location, or “sweet spot,” when it was starting its business was a town the size of 20,000. One strategy Wal-Mart could have pursued would have been to go around the country opening stores in its sweet spot locations and then later go back and “fill in” less desirable locations. With this alternate strategy, the first store in Minnesota would have opened a lot sooner than it actually did, as there certainly are locations in Minnesota right in Wal-Mart’s sweet spot. But with this strategy, stores would initially have been much more spread out. Wal-Mart would have lost the gains from having a dense network of stores.

Instead, Wal-Mart waited to get to the plum locations until it could build out its store network to reach them. It never gave up on density.

[Interviewer]: And when you see what it’s done, with the benefit of hindsight, it seems like the right thing to do, almost the obvious thing to do. But that would suggest that other retailers would have also recognized the benefits of density and should have engaged in the same behavior. Did Wal-Mart invent, if you will, this retailing idea?

Holmes: It is useful to contrast Wal-Mart with Kmart, as both opened their first stores in 1962. Wal-Mart, from the very beginning, was different from Kmart. Wal-Mart built up its store network gradually from the center out; Kmart (and Target, for that matter) began by scattering stores all over the country. Early on, Wal-Mart focused on logistics, with things like daily deliveries from its distribution centers, early adoption of advanced communication technology and so forth. Kmart did not do these things. A customer going into these two stores might not be able to see much of a difference between the two stores. But underneath, in the way that merchandise was getting on the shelves, these stores were very different.

And for a visual kicker, see this 26-second video, plotting the locations of Wal-Mart stores from 1962 to 2004.

[I found the references to the interview and video at Marginal Revolution. There are working versions of Holmes’ paper online, but since the URLs have a non-permanent feel about them, I suggest you just search for the paper’s title in your favorite search engine.]

Sunday, May 21, 2006

GM Gets Shifty With Numbers

General Motors (GM) gets shifty with numbers in a recent print ad titled “Change is in the air.” I saw it on page 53 of The Economist magazine’s U.S. edition dated May 13-19, 2006.

The ad begins:

We’re changing a lot of things at GM these days. Even people’s minds. Take the environment. Today we lead the industry in the number of models that get an EPA estimated 30 mpg or better on the highway. More than Toyota or Honda.

The ad does not mention that GM also leads the industry in the number of models that get an EPA estimated 29 miles per gallon or less. What? It turns out GM can win either side of this issue because GM has significantly more models than any other car company. In other words, “We have the most models above 30 mpg! We have the most models below 30 mpg! How? Because we have, by far, the most models!”

To make a more meaningful comparison, let’s look at the percentage of each car company’s models that get 30 mpg or better on the highway. Using the Environmental Protection Agency’s data for model-year 2006 cars, we find that 14.1% of GM’s models get 30 miles per gallon or better on the highway. That’s less than half the 30-mpg+ percentage of either Toyota (36.7%) or Honda (36.4%). It’s also less than the average 30-mpg+ percentage across all car models in the database (17.3%).

Thus, GM’s “leadership” doesn’t look so good from this more meaningful angle. (For those who remember the Arizona State University ad that claimed superiority over Stanford and several Ivy League schools in the number of freshmen who were top-10% high schoolers, this GM ad is abusing numbers in a similar way.)

For the record, below is a ranking of car brands by the percentage of models that get 30 mpg or better on the highway. You can create this analysis from the 2006 EPA data file using an Excel PivotTable. The original data represents each GM brand separately, but I have added a line at the end that totals the GM brands listed in the ad (Buick, Cadillac, Chevrolet, GMC, Hummer, Pontiac, Saab, and Saturn).
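The pivot-style calculation behind that ranking is simple to sketch in code. The rows below are made up for illustration (the real analysis uses the EPA’s model-year 2006 data file, which I haven’t reproduced here); the logic is the same: for each brand, count models at 30+ highway mpg and divide by that brand’s total models.

```python
# Sketch of the per-brand percentage calculation.
# The (brand, highway-mpg) rows are invented for illustration.
from collections import defaultdict

models = [
    ("Toyota", 34), ("Toyota", 28), ("Toyota", 31),
    ("Honda", 30), ("Honda", 27),
    ("Chevrolet", 32), ("Chevrolet", 24), ("Chevrolet", 22), ("Chevrolet", 19),
]

counts = defaultdict(lambda: [0, 0])  # brand -> [models at 30+ mpg, total models]
for brand, hwy_mpg in models:
    counts[brand][1] += 1
    if hwy_mpg >= 30:
        counts[brand][0] += 1

# Rank brands by their 30-mpg+ percentage, highest first
for brand, (hits, total) in sorted(counts.items(),
                                   key=lambda kv: kv[1][0] / kv[1][1],
                                   reverse=True):
    print(f"{brand}: {100 * hits / total:.1f}% of {total} models at 30+ mpg")
```

Note that ranking by percentage rather than raw count is exactly what keeps a brand from “winning” simply by fielding the most models.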

Sunday, May 14, 2006

Rosum: A Down-to-Earth Version of GPS

At a recent event, I met someone from Rosum, a company with a new twist on Global Positioning System (GPS) technology. I know nothing about the company’s business outlook, but their technology is a great example of elegant design.

A little background: GPS is the satellite system that allows a receiver to pinpoint its position anywhere on Earth. The receiver locates itself by knowing the positions of at least three satellites and the time it takes each satellite’s signal to reach it. This page’s section on “2-D Trilateration” has a good explanation of the general concept, which is easier to understand as a two-dimensional example than the GPS system’s 3D version.
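The 2-D version of trilateration is compact enough to show in code. This is my own minimal sketch, not anything from GPS or Rosum: given three beacons at known coordinates and the distances to each (which a real receiver would derive from signal travel times), subtracting pairs of circle equations leaves a 2x2 linear system for the unknown position.

```python
import math

def trilaterate(beacons, distances):
    """Recover a 2-D point from its distances to three known beacons."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Subtracting circle equations cancels the squared unknowns,
    # leaving A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    b2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a11 * a22 - a12 * a21  # zero if the beacons are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# A receiver at (3, 4), with beacons at known spots:
beacons = [(0, 0), (10, 0), (0, 10)]
true_pos = (3, 4)
dists = [math.dist(b, true_pos) for b in beacons]
print(trilaterate(beacons, dists))  # ≈ (3.0, 4.0)
```

The collinearity caveat in the comment hints at why geometry matters for real systems too: satellites (or towers) that are too close to lined up give poor position fixes.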

GPS works best when a relatively unobstructed path exists from the receiver to the satellites. Thus, reception in areas with hills or high buildings can be problematic, as is reception inside buildings.

Rosum’s plan is to run a GPS-like system using over-the-air television signals from broadcast towers. I like the plan because of its elegance at multiple levels:

  • There are already enough broadcast towers around most urban areas, saving a huge amount of time and money that would otherwise go to creating infrastructure.
  • Television signals already include precise synchronization information, which is important for enabling the trilateration.
  • Compared to GPS signals, over-the-air television signals are higher-power and lower-frequency, both of which improve reception indoors and amid uneven terrain.
  • Television towers don’t move, as satellites do, and over-the-air television signals’ relatively short path to the receiver is subject to less distortion than satellite signals from space. These factors reduce system complexity.

Of course, Rosum’s “GPS-TV” concept has its own challenges—the company has been around since 2000 working on them. But, like hearing glasses, this is one of those ideas that deserves mention just for its cleverness.

The company plans to make money from a variety of applications, which you can view here. The technology is currently in field testing.

Thursday, May 11, 2006

Dave Ibsen, Webby Winner

Congratulations to my friend Dave Ibsen, whose 5 Blogs Before Lunch won “Best Business Blog” in the 2006 Webby awards.

Dave’s blog primarily covers marketing, advertising, and branding topics. It’s a companion to his consulting practice, which I recommend to those in need of technology- or consumer-marketing insight.

Given who else won Webby awards this year, Dave seems to be in good company.

Tuesday, May 9, 2006

Ormerod’s Why Most Things Fail and Schelling’s Segregation Models

I recently read Paul Ormerod’s Why Most Things Fail. Focusing on the frequent failure of companies and government policies, Ormerod argues that the environment in which these entities exist is so complex and unpredictable that even the best-laid plans cannot reflect what will really happen. Between this complexity and the competition among many players laying plans, the ones that succeed often get there by luck. And once successful, the only viable strategy for long-term survival is constant adaptation via trial and error.

For a review of the book, I’ll just agree with this Financial Times review’s mixed bag of praise and criticism. However, I will highlight one of the book’s better examples.

Economist Thomas Schelling created a model of how racial segregation happens. First, he posited a large grid, like a chessboard but much larger. Each square holds a red person’s house, a green person’s house, or nothing. A person will move if a certain number of his immediate neighbors are a different color—the number is fixed across all people, representing a societal level of tolerance for different-race neighbors.

It’s a complex system because a single person’s move can have cascading effects on the former and new neighbors, which in turn can have cascading effects. Thus, the results are not obvious from the initial conditions.

Northwestern University has a Schelling-inspired segregation simulator, where I generated this before-and-after combination.

Those familiar with cellular automata (CA) may have already expected that the few rules would give rise to order from randomness, but the nature of the order is surprising. For my model, I assumed that each person wanted at least 40% of neighbors to be the same race, yet the system ended up 85% segregated. Due to the system’s interconnections, the relatively weak individual preferences, upon interaction, led to a strongly segregated society, a result typical of Schelling segregation models.
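The model itself is small enough to sketch in code. The version below is my own toy implementation on a wrap-around grid with the same 40% same-color preference, not the Northwestern simulator’s exact setup; the grid size, empty fraction, and movement rule (unhappy agents jump to random empty squares) are all my assumptions.

```python
# Toy Schelling segregation model: grid size, empty fraction, and
# movement rule are illustrative assumptions, not Schelling's specifics.
import random

SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.1, 0.4
random.seed(1)

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY_FRAC
                     else ("R" if r < (1 + EMPTY_FRAC) / 2 else "G"))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, x, y):
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                yield grid[(x + dx) % SIZE][(y + dy) % SIZE]

def unhappy(grid, x, y):
    """True if fewer than THRESHOLD of occupied neighbors match this agent."""
    me = grid[x][y]
    same = sum(1 for n in neighbors(grid, x, y) if n == me)
    occupied = sum(1 for n in neighbors(grid, x, y) if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

def step(grid):
    """Move unhappy agents to random empty squares; return how many moved."""
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] and unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is None]
    random.shuffle(empties)
    for (x, y), (ex, ey) in zip(movers, empties):
        grid[ex][ey], grid[x][y] = grid[x][y], None
    return len(movers)

def mean_same_fraction(grid):
    """Average, over agents, of the fraction of occupied neighbors that match."""
    fracs = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y]:
                same = sum(1 for n in neighbors(grid, x, y) if n == grid[x][y])
                occ = sum(1 for n in neighbors(grid, x, y) if n is not None)
                if occ:
                    fracs.append(same / occ)
    return sum(fracs) / len(fracs)

grid = make_grid()
print("before:", round(mean_same_fraction(grid), 2))
for _ in range(100):
    if step(grid) == 0:  # stop once everyone is content
        break
print("after: ", round(mean_same_fraction(grid), 2))
```

Running something like this typically shows the same qualitative effect as the simulator: the average same-color neighbor fraction ends up well above the 40% each individual demanded.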

This type of result—where a societal outcome emerges nonlinearly from complex interactions—is what Ormerod sees everywhere, albeit in yet more complex form than this model.

And finally, if you’re thinking that the Schelling model has its own kind of predictability, it does at an aggregate level. Ormerod does not give this point enough weight with regard to Schelling’s work, but elsewhere in the book he describes how company failures are predictable in the aggregate by a power law distribution similar to that of biological species extinctions. However, these high-level patterns won’t tell us when particular companies will fail or, in a real-world city, which people will move exactly where.