Friday, December 28, 2007

Lane and Oreskes’ The Genius of America

In The Genius of America, Eric Lane and Michael Oreskes write:

The message we are hearing is that our government does not work. The message we should be hearing is that our government is a reflection of our own divisions. What we need is not a new system of government. We need a renewed willingness to work out our differences and find compromises, consensus and that other now-popular phrase, common ground.

While that last sentence might sound like a naive call to higher principle, the authors argue that finding compromise amid conflicting interests is what the United States has done superbly well for more than 200 years—but not so well lately. They believe that political developments of the past few decades are starting to undermine the U.S. Constitutional system, which manages conflicts by a process of checks, balances, and compromise.

For example, Lane and Oreskes critique the rise of voter initiatives, such as California’s Proposition 13:

It won in a landslide among those who voted. But even so, fewer than 50 percent of California’s registered voters cast a yes vote for the proposition. Thus a minority of registered Californians decided to reduce tax burdens and limit the capacity of the government to increase its future revenues.

It was appealingly easy to adopt this initiative. There was none of the scrutiny of normal legislative process nor the need for coalition building and compromise. No committee of one legislative house and then of the second could block its path to a vote. No requirement that a bill pass two separate legislative houses stood in the way. No executive stood waving a veto pen. No colleagues were hovering around demanding compromise for support. No lobbyists were demanding changes. No time-consuming hearings from which concerns of the public or experts had to be addressed and weighed. No long public debates in which legislators had to explain why they favored or disfavored the issue. No arcane procedural rules blocked a vote either in a committee or legislative house. No worries that a wrong step might anger constituents. No competition among this idea and the thousands of other ideas wanting legislative attention. In short, the initiative did not have to pass through any of the screens that in the legislature protect against the tyranny of the majority or even the minority and require deliberation and consensus.

Although voter initiatives are a state-level phenomenon, the authors note the existence of a proposed National Initiative, associated with the long-shot presidential campaign of former Senator Mike Gravel. The authors are not forecasting doom at the hands of Mike Gravel, but they hold him up as a symbol of what’s going wrong: a former Senator trying to make government better by short-circuiting it.

Another interesting example from relatively recent history:

In the mid-1980s, politicians were worried about the federal deficit and about the danger that voters would punish them for it. But there was no consensus on the steps to take to curb it: raise taxes, cut defense spending, cut social spending. So they adopted Gramm-Rudman, what [political author John] Ehrman calls “one of the most disgraceful and irresponsible laws ever passed.” The law gave to a non-elected official the power to simply cut the budget if the deficits didn’t shrink. This [cutting] was enacted without any legislative process. No hearings. No committee debate or vote. “Congress and the White House abandoned their political responsibilities for making fiscal decisions, and rushed instead to hand power to automatic, technical mechanisms,” wrote Ehrman. “This is hardly how republican institutions are intended to function.”

But during virtually the same period, Congress also enacted a tax reform measure that made substantial strides toward a simpler and fairer tax code. The measure demonstrated that “when politicians acted seriously, the political system was able to deal with complex issues quite well.” But this accomplishment was not the product of public virtue. “None of those in the process rose above party or personal interests.” The parties advocated their positions as hard as they could and then, recognizing that a winner-take-all attitude would fail, compromised. Tax reform was the system working just as the framers had designed it to. “In contrast, Gramm-Rudman was the product of panic and a desire to circumvent politics.”

The mention of “public virtue” touches a key theme in the book, that the genius of the Constitution is how it channels individuals’ and groups’ private self-interests into public virtue. The book’s first part recounts how the framers got there, as they learned from the flaws of the Articles of Confederation, the Constitution’s predecessor.

The Articles assumed that each state would itself act with public virtue, doing what was best for the national interest. But there was no enforcement mechanism, so individual states often pursued their own interests, conflicting with each other and the national interest. This system rarely and barely worked, to the point that during the Revolutionary War, the bickering states almost starved George Washington’s Continental Army.

As a result, the framers designed the Constitution to create public virtue without requiring it as an input. It would result from competition among self-interested groups and ideas. But instead of “winner take all,” the system would force deliberation and compromise among competing factions—thereby producing a best-possible consensus given the conflicting aims.

The authors demonstrate that the Constitution itself was the result of such a competitive, give-and-take process. This process did not produce the exact result that any side wanted at the time. But more than 200 years later, we can say that it obviously worked well.

Which brings us back to where we started. The authors are concerned that the American people and their representatives are losing touch with why our system has worked so well. The people see government gridlock and assume the system is broken. But what’s broken is the way the participants are playing—or not playing—their institutional roles.

In their most recent examples, the authors blame the second Bush administration for its ambitious moves to expand executive power without congressional—and in some cases judicial—oversight. In addition, from 2001 to 2006, the Republican congressional leadership “operated as the president’s floor leaders in the Congress, rather than his separate and coequal partners in government.”

These moves led to a “winner take all” environment, which is counterproductive in two ways. If the opposition has the votes, there’s gridlock; if the majority can push legislation through without compromise (as the authors believe was the case with the second Bush administration’s Iraq policy), then checks and balances have been defeated.

To be clear, the authors do not expect politicians to ignore their party in favor of their institutional roles. But they point out that the reverse, putting party entirely ahead of institutional role, has not always been the case, nor should it be:

Senator Harry S. Truman investigated President Roosevelt’s administration, and Senator Lyndon B. Johnson investigated President Truman’s administration. It was a Republican Senator, Howard Baker, whose incessant questions crystallized the belief that a Republican president knew more about Watergate than he had told. And it was a Democratic Senator, Daniel Patrick Moynihan, who blocked the Democratic president and his wife from their plan to overhaul American health care.

So we find the framers did not entirely succeed in removing the need for public virtue as an input to the political process. A willingness to play one’s institutional role—especially at critical times, when that role and one’s other interests may conflict—is necessary for the Constitution’s system to work. The authors call this our Constitutional Conscience, and they want American politics to get more Constitutionally Conscientious.

For those with further interest, here is a link to the book. The first part, which I glossed over, is worth the price of admission alone. It recounts how the Constitution emerged from a historic combination of big-thinking, practical problem-solving, and shrewd politicking. The book then examines how the Constitution adapted to the massive changes and challenges that came with America’s growth. Finally, it considers the recent challenges highlighted above, which the authors feel are novel and serious.

Saturday, December 15, 2007

Mortenson and Relin’s Three Cups of Tea

Don’t take my word for it: Three Cups of Tea has 482 user reviews, 91% of which are five out of five stars. It’s one of the few books I’d recommend to anyone, no caveats.

Three Cups of Tea is the true story of an unlikely saint, an American who against all odds builds schools in Pakistan.

Here is a good summary from Publishers Weekly:

Some failures lead to phenomenal successes, and this American nurse’s unsuccessful attempt to climb K2, the world’s second tallest mountain, is one of them. Dangerously ill when he finished his climb in 1993, [Greg] Mortenson was sheltered for seven weeks by the small Pakistani village of Korphe; in return, he promised to build the impoverished town’s first school, a project that grew into the Central Asia Institute, which has since constructed more than 50 schools across rural Pakistan and Afghanistan. Coauthor [David Oliver] Relin recounts Mortenson’s efforts in fascinating detail, presenting compelling portraits of the village elders, con artists, philanthropists, mujahideen, Taliban officials, ambitious school girls and upright Muslims Mortenson met along the way. As the book moves into the post-9/11 world, Mortenson and Relin argue that the United States must fight Islamic extremism in the region through collaborative efforts to alleviate poverty and improve access to education, especially for girls. Captivating and suspenseful, with engrossing accounts of both hostilities and unlikely friendships, this book will win many readers’ hearts.

To convey the extremes of the human spirit, and of the human condition, involved here, a few notes:

  • At the time of his K2 near-death experience, Mortenson was the mountain-climbing equivalent of a ski bum, intermittently working as an emergency room nurse to fund his next climb. When he returned to Berkeley, California, to fulfill his promise to build the Korphe school, he had no money, no contacts, and no idea what to do. To save money while raising funds, he lived in the back of a Buick.
  • Korphe is a place where central heating is a yak-dung fire in the middle of a room made from rock and mud. When Mortenson got there, not only did it have no school, but the nearest doctor was a week’s walk away. One out of three Korphe children died before reaching their first birthday. The basic medicines in Mortenson’s first-aid kit and his training as a nurse were like godsends—one of many examples where only a little technology and know-how could alleviate much suffering. Later, Mortenson was involved in a simple clean-water project that halved the infant-mortality rate of a 2,000-person community.
  • Even for the healthy, life in Korphe was a constant struggle amid few resources. For example, during a rare celebration in which a ram was slaughtered, “forty people tore every scrap of roasted meat from the skinny animal’s bones, then cracked open the bones themselves with rocks, stripping the marrow with their teeth.”
  • “Traveling with a party of [Korphe] men hunting to eat, rather than Westerners aiming for summits with more complicated motives, Mortenson saw this wilderness of ice with new eyes. It was no wonder the great peaks of the Himalaya had remained unconquered until the mid-twentieth century. For millennia, the people who lived closest to the mountains never considered attempting such a thing. Scratching out enough food and warmth to survive on the roof of the world took all of one’s energy.” (An interesting counterpoint to the famous phrase, “Why climb the mountain? Because it is there.”)
  • About his focus on girls’ education: “‘Once you educate the boys, they tend to leave the villages and go search for work in the cities,’ Mortenson explains. ‘But the girls stay home, become leaders in the community, and pass on what they’ve learned. If you really want to change the culture, to empower women, to improve basic hygiene and health care, and fight high rates of infant mortality, the answer is to educate girls.’”
  • Mortenson on the War on Terror: “If we try to resolve terrorism with military might and nothing else, then we will be no safer than we were before 9/11. If we truly want peace for our children, we need to understand that this is a war that will ultimately be won by books not bombs.”

If you wish to buy Three Cups of Tea online, take an extra click and go to the Three Cups of Tea site, where you’ll see a link to buy the book. Clicking that link and then buying will get Mortenson’s organization, the Central Asia Institute, up to 7% of the sale.

Monday, December 10, 2007

ExactChoice Epilogue: New CNET Recommenders

Over the past several months, CNET has been activating product recommenders in a variety of categories. Included are cell phones, digital camcorders, HDTVs, laptops, MP3 players, and printers.

Those familiar with the ExactChoice Recommender will recognize a lot of ExactChoice in these new recommenders. Indeed, the fact they exist is largely due to the success of the ExactChoice Recommender when it ran within the CNET Reviews site. That run was always seen as a precursor to the real integration, which would include more categories, tighter integration with CNET editorial, and technical integration with the CNET data platform.

These new recommenders represent the real integration—or perhaps the better word is reinvention. The new recommenders are literally a new generation of recommender after ExactChoice.

Although I kibitzed from my lair in the data side of CNET (CNET Channel), the real work was done by a team in the media side of the CNET brand. They did a great job of adapting the ExactChoice concept to CNET’s infrastructure and strengths.

In particular, the new recommenders leverage CNET’s expert editors to ask and explain the right questions. For example, note the integration of advice within the choices (especially the second one) for this question from the digital camcorders recommender.

Also, unlike the original ExactChoice Recommender, the new recommenders have an additional level of qualification for products: The results are sorted by editor’s rating, based on hands-on product reviews. So if you end up with many products that fit your needs, the editor’s rating is a useful tie-breaker. You can also sort on price, if that’s more important to you.
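In spirit, that default ordering is just a two-key sort: editor’s rating first, with price as the secondary criterion. Here is a minimal sketch in Python (the product names, ratings, and prices below are invented for illustration, not CNET data):

```python
# Hypothetical product data; ratings imagined on a 1-10 editor scale.
products = [
    {"name": "Camcorder A", "rating": 7.8, "price": 349},
    {"name": "Camcorder B", "rating": 8.6, "price": 599},
    {"name": "Camcorder C", "rating": 8.6, "price": 499},
]

# Default view: best editor's rating first; lower price breaks ties.
by_rating = sorted(products, key=lambda p: (-p["rating"], p["price"]))

# Alternate view: sort by price, if that's more important to you.
by_price = sorted(products, key=lambda p: p["price"])
```

Using a tuple as the sort key is the idiomatic way to express “rating first, then price” in one pass, since Python compares tuples element by element.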

As an example, here is the start page for the digital camcorders recommender. The lower part of the page also has links to the other CNET recommenders.

Thanks to all those at CNET who made these new recommenders happen!

Saturday, December 8, 2007

XTC’s Go 2: “This is a Record Cover.”

I recently ran across an all-time favorite album cover, XTC’s Go 2. It’s from 1978, and although the album’s music has not stood the test of time, the cover’s consumer postmodernism (or should that be postmodern consumerism?) has.

Below is the cover image, and for your reading convenience, further below is a larger version of the text (both adapted from Wikipedia’s Go 2 page). Enjoy.

This is a RECORD COVER. This writing is the DESIGN upon the record cover. The DESIGN is to help SELL the record. We hope to draw your attention to it and encourage you to pick it up. When you have done that maybe you’ll be persuaded to listen to the music - in this case XTC’s Go 2 album. Then we want you to BUY it. The idea being that the more of you that buy this record the more money Virgin Records, the manager Ian Reid and XTC themselves will make. To the aforementioned this is known as PLEASURE. A good cover DESIGN is one that attracts more buyers and gives more pleasure. This writing is trying to pull you in much like an eye-catching picture. It is designed to get you to READ IT. This is called luring the VICTIM, and you are the VICTIM. But if you have a free mind you should STOP READING NOW! because all we are attempting to do is to get you to read on. Yet this is a DOUBLE BIND because if you indeed stop you’ll be doing what we tell you, and if you read on you’ll be doing what we’ve wanted all along. And the more you read on the more you’re falling for this simple device of telling you exactly how a good commercial design works. They’re TRICKS and this is the worst TRICK of all since it’s describing the TRICK whilst trying to TRICK you, and if you’ve read this far then you’re TRICKED but you wouldn’t have known this unless you’d read this far. At least we’re telling you directly instead of seducing you with a beautiful or haunting visual that may never tell you. We’re letting you know that you ought to buy this record because in essence it’s a PRODUCT and PRODUCTS are to be consumed and you are a consumer and this is a good PRODUCT. We could have written the band’s name in special lettering so that it stood out and you’d see it before you’d read any of this writing and possibly have bought it anyway. What we are really suggesting is that you are FOOLISH to buy or not buy an album merely as a consequence of the design on its cover. 
This is a con because if you agree then you’ll probably like this writing - which is the cover design - and hence the album inside. But we’ve just warned you against that. The con is a con. A good cover design could be considered as one that gets you to buy the record, but that never actually happens to YOU because YOU know it’s just a design for the cover. And this is the RECORD COVER.

Again, this is not an album recommendation, unless you want to hang the cover on your wall. Go 2’s music pales in comparison to XTC classics such as Black Sea (1980), English Settlement (1982), The Big Express (1984), and Skylarking (1986). So if you want to explore the rich musical offerings of XTC (“one of the smartest—and catchiest—British pop bands to emerge from the punk and new wave explosion of the late ’70s,” according to All Music Guide), start with those instead.

Monday, November 26, 2007

The Unmeasured Medium of Meetings

If you’ve ever found yourself in a meeting where the most interesting thing to do is silently calculate the cost of the meeting to your company, this is for you: PayScale’s Meeting Miser lets you enter your company’s city and the job titles of the people present. Click the start button and watch the dollars add up in real time.

It’s a free tool, provided somewhat tongue-in-cheek. Yet Meeting Miser strikes a chord because the internal business meeting is a largely unmeasured and unaccountable medium. Whereas you need to complete an expense report to buy a $35 toner cartridge for the office laser printer, you can blow $500 of people’s time in a meeting at will. Of course, it’s harder to measure the return on investment of a particular meeting than it is to justify the toner cartridge, but does that make it not worth trying?

In a world where you rarely hear the complaint “I was in too few meetings today,” perhaps a more serious version of Meeting Miser—integrated into a company’s scheduling and human-resources systems—would be an interesting experiment.

Saturday, November 3, 2007

Vampires versus Math

In an act of monster-slaying unlikely to make the movies or TV, physicists Costas J. Efthimiou and Sohang Gandhi show mathematically why vampires do not exist.

Their thesis:

Anyone who has seen John Carpenter’s Vampires, Dracula, Blade, or any other vampire film is already quite familiar with the vampire legend. The vampire needs to feed on human blood. After one has stuck his fangs into your neck and sucked you dry, you turn into a vampire yourself and carry on the blood-sucking legacy. The fact of the matter is, if vampires truly feed with even a tiny fraction of the frequency that they are depicted as doing in the movies and folklore, then humanity would have been wiped out quite quickly after the first vampire appeared.

The math is simple. Every time a vampire bites a human, the human becomes a vampire, reducing the human population by one and increasing the vampire population by one.

Let’s say there are 99 humans and 1 vampire. The vampire claims its first victim. Now there are 2 vampires and 98 humans.

The two vampires each claim a new victim. That would make 4 vampires and 96 humans. The four vampires each claim a new victim, leaving 8 vampires and 92 humans.

Because the number of vampires doubles at each step, the vampires eliminate all the original 99 humans four steps later.

What if we started with 1 vampire and 99,999 humans? It would take only 17 steps to eliminate all the humans. What about 999,999 humans? Just 20 steps.

The authors provide a scenario in which the first vampire appeared in 1600, and each vampire claimed one victim a month. The world population at the time would have been vampirized in less than three years.
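The doubling argument is easy to check with a short simulation. A sketch, assuming (as the authors do) that every vampire converts exactly one human per feeding step:

```python
def steps_to_extinction(humans, vampires=1):
    """Feeding steps until no humans remain, with one victim per vampire per step."""
    steps = 0
    while humans > 0:
        bitten = min(vampires, humans)  # can't claim more victims than humans remain
        humans -= bitten
        vampires += bitten  # each victim becomes a vampire
        steps += 1
    return steps

# In the authors' 1600 scenario (total population 2**29, one feeding per month),
# humanity lasts steps_to_extinction(2**29 - 1) = 29 months: under three years.
```

The simulation also shows why scale barely matters: multiplying the starting population by ten adds only three or four steps, since the vampire population grows geometrically.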

A few comments:

  • The authors conveniently assert the year 1600’s total population (humans plus one vampire) to be a number exactly in the 2^n series (536,870,912, which is 2^29). This enables a tidy last step where 268,435,456 vampires have 268,435,456 victims.
  • The authors define vampires by way of the movies. However, the authors do not model the fact that, in the movies, the humans usually fight back and vanquish the vampires. If we’re using movies as the guide, perhaps this better explains why vampires do not exist. ;)
  • Where did that first vampire come from?

If you read the full article, the vampire section is halfway down the page, under the subheading “Vampires.”

Monday, October 22, 2007

Review: Shaffer’s How Computer Games Help Children Learn

Part critique, part proposal, David Williamson Shaffer’s How Computer Games Help Children Learn is about what kids need to learn and how they need to learn it.

Shaffer asserts that the common practice of school is already a kind of game, with rules rooted in the industrial revolution:

School is a game about thinking like a factory worker. It is a game with an epistemology of right and wrong answers in which students are supposed to follow instructions, whether they make sense in the moment or not. Truth is whatever the teacher says is the right answer, and actions are justified based on appeal to authority. School is a game in which what it means to know something is to be able to answer specific kinds of questions on specific kinds of tests.

Back when a high-school diploma led to a good manufacturing job, this version of school may have made sense. However, U.S. manufacturing jobs are somewhere between going and gone. That leaves the low-end service sector as the primary employer of the high-school-educated workforce. Thus, yesterday’s middle-class auto worker is today’s barely-getting-by burger flipper.

In Shaffer’s view, the only good jobs will increasingly be those beyond the traditional high-school education: jobs that address problems with many possible answers, that require creativity, and that reward innovation. Everything else will be automated, offshored, or marginalized to the low-wage economy.

So if society wants to leave no child behind, the education system needs to change its game. The key shift is from an emphasis on teaching facts to teaching skills. Put another way, Shaffer wants more learning by doing. He wants students to learn by making them participants in simulations of real-world challenges that engineers, urban planners, journalists, and other professionals face. And this is where computers come in.

Think about SimCity, the computer game where you manage a simulated city as it grows. It’s a simulation. It’s a computer game. It’s a learning experience:

[Players] see what happens when they make changes in urban ecosystems. For example, if you put more parks in a city, the cost of public utilities goes up because you have to keep the parks clean. If you put an industrial site next to residential housing, the residential land values fall and the crime rates rise. As a result, players must decide whether to raise taxes, decrease the green space, move the industry, or risk urban flight—or, more realistically, decide which combination of these choices and in what measure will lead to the best long-term outcomes for the city.

Immersing kids in such a world lets them learn key concepts and ways of thinking as they progress through the game. Succeeding in the game requires learning and understanding. Contrast that with reading a bunch of articles about urban planning and being tested on the facts. Will those facts ever mean anything to the students? How much more would a student want to read about urban planning after getting hooked on SimCity?

But Shaffer’s learning games (developed by him and his colleagues at the University of Wisconsin-Madison) are different from SimCity. Whereas SimCity is an imaginary place, Shaffer’s games take care to simulate places, things, processes, and constraints from the real world. The computer simulation provides an extension of reality rather than a replacement.

For example, an urban-planning game he highlights is ... modeled on the real world of the city players live in and the real work of planners who shape that city. Players are redesigning a city, but it is their city. They can see and touch the places they are redesigning and can see how those changes might make their lives and the lives of those around them richer and more satisfying. However, their choices are constrained by the economic, social, and physical realities of life in a city and by the norms and practices of the profession of urban planning.

Given this world, students learn key skills necessary to do urban planning in the real world. These include ways of thinking and talking about problems that apply well beyond urban planning—which is the larger point: The underlying creativity, critical thinking, and innovation are what these students will need to compete in the global marketplace, whether they become urban planners or anything else that pays more than subsistence wages.

Of course, this all sounds reasonable, but the games need to be good. SimCity was successful because of fantastic game design and execution, not because millions of consumers inherently craved a city simulator. So can a bunch of academics make something that both satisfies Shaffer’s educational vision and is compelling enough to keep kids at the screen?

Shaffer and colleagues’ games are at early stages and are themselves academic projects. Initial pilot results are promising, but the results are from tests with close participation from mentors and facilitators. While the human element is part of Shaffer’s design, how much mentoring and facilitating is necessary for success? Is that amount practical for large-scale use?

Finally, how do Shaffer’s games get integrated into schools? Here Shaffer has a surprising answer: Maybe they don’t, at least initially. Several of the pilot tests have been in after-school programs. He speculates that they could also be embedded in larger virtual worlds such as Second Life. Although some might see this as a cop-out, avoiding the main institution that Shaffer critiques, I see it as a practical path that brings change where it’s easiest to make change. If the change is good, it will spread.

As is probably apparent, I like Shaffer’s ideas. But if you want a good read along with good ideas, this book might not meet your standards. Although the prose is relatively direct, it carries two burdens. First, Shaffer often tries to use and/or explain his field’s jargon, which I’ve spared you because I found it a distraction as a general reader. Second, Shaffer spends many pages describing computer games using words and the occasional picture, yet the thing that makes the games compelling is their visual and interactive appeal. If the book’s content could be presented as a short documentary film or even a screencast, it would be far more likely to hold a general audience’s attention.

(This suggests an accompanying book: How Videos of Children Learning from Computer Games Help Adults Learn the Value of Children Learning from Computer Games. Or should it be a video?)

Bottom line: How Computer Games Help Children Learn gave me a fresh angle on today’s educational challenges while detailing the first steps of a promising way forward. For people already interested in the issues, the book will be well worthwhile. For anyone else that wants to put a toe in the water, check Shaffer’s Epistemic Games Web site, and if that gets you going, then get the book.

Saturday, October 13, 2007

Customer Service Everywhere

“In most corporate cultures, customer service is regarded as an afterthought and a cost center,” said Craig Newmark, explaining why great customer service is the exception, not the norm. Craig is the founder of craigslist, the world’s most popular forum for classified advertising. It handles 8 billion page requests per month across 450 craigslist sites, each covering a specific geographic region, spanning all 50 U.S. states and more than 50 countries (more info at craigslist’s fact sheet). By his choice, Craig’s full-time job is customer service representative, addressing craigslist users’ questions, complaints, and problems.

Although almost any executive would agree with Craig’s advocacy for better customer service, I see few following Craig’s method of doing something about it, even part-time. To be clear, I am not counting executives who “talk to customers” by talking to executives at their biggest accounts. While that’s good and appropriate, it’s rarely the same as talking to the people who actually use your product.

An executive should talk to users directly because whether the executive’s product delivers value is determined at the point of use, not at a power lunch with another exec. It sounds obvious, yet so much of corporate and product strategy is based on assumptions about users by people who do not regularly engage with said users. In theory, “customer intelligence” percolates up the corporate hierarchy and/or is collected independently from customer surveys. In practice there’s no substitute for executives having ongoing, direct interactions with a representative sample of actual customers.

But let’s not restrict this to executives.

Engineers should talk to users directly because it makes problems real. For example, if the people in your organization who regularly talk to users can’t convince the engineer responsible that a problem matters, have the engineer talk to a couple users who have been burned by the problem. Your organization has an issue if you constantly need to invoke this, but on an occasional basis it can be just the right medicine.

Salespeople should talk to users—after the sale has been made—to really understand how the product is used. That way, the salesperson can tell a story to prospects that is not just compelling but also realistic. “Realistic” is important because if expectations are set right, a whole class of customer-service problems disappears: those where Sales promised X, but after the product is bought and installed, Customer Service must now answer for the fact the product actually does Y.

Maybe it’s too easy for me to take this position, because most of my career has involved a founder-level role in start-ups and business units within larger companies. In the early days of any effort, founders tend to work directly with customers—the execs, the users, and anyone else that might matter—because no one is more qualified to do so (and/or because no one else is available to do so ;). Along the way, it’s natural to use this customer proximity to learn first-hand how to improve the product and the human processes that support it. These elements all fit together because a founder is often responsible for it all.

For bigger companies with established products, it’s different. There are whole organizations for customer service versus product development versus market/customer research. At one of my start-ups, Personify, we were well down that path when we reached the 100-employee mark. Although I still spent a significant amount of time directly with customers—and often with users specifically—it tended to be with the more challenging ones: the biggest/highest-stakes customers, the most creative customers that were exploring the product’s boundaries, and the most messed-up customers that needed turning around.

This sample was not representative. In retrospect, I think it biased my later product-design decisions in a way that favored the edges of the distribution, not the great middle.

Looking back, I had the advantage of having started from a position of being close to all the early customers. For an executive hired into an already established company, I suspect that the institutional barriers would be even higher to getting customer visibility that is first-hand and representative—short of taking Craig’s path and living on the support desk.

But that’s not a reason to avoid the issue. It’s just a warning to be deliberate rather than having your customer contact occur as a byproduct of something else you’re doing—at least if you’re planning on using what you learn to inform other decisions.

Finally, having so far highlighted the benefits of people outside the customer-service org doing customer service, it’s worth noting the reverse. People in Customer Service are great exports to other parts of the company, assuming you haven’t offshored all of Customer Service and thus permanently siloed those people. At Personify, people who started in Customer Service and then went on to other organizations (such as Business Development, Product Marketing, and Presales) were often better performers than their peers. I believe it was because they had a tangible sense for what the company and its product actually did—and did not do—from the customer’s perspective. We tried to explain it via presentations, documentation, training, and tag-alongs in meetings—all of which were no substitute for actually being there and doing it.

The moral of the story: When it comes to serving customers and particularly users, be there and do it. You don’t have to be Craig Newmark, but be more than the person who only knows customers as concepts.

[Update, 3/30/2009: I randomly ran across this BusinessWeek article, which says: “To make sure that everyone at Amazon understands how customer service works, each employee, even [CEO Jeff] Bezos, spends two days on the service desk every two years. ‘It’s both fun and useful,’ says Bezos. ‘One call I took many years ago was from a customer who had bought 11 things from 11 sellers—and typed in the wrong shipping address.’”]

Sunday, September 30, 2007

Analytics That Explain Themselves

As computers have become more powerful, so has the complexity of the analytics they can perform. But this power often brings a paradox: Complex and interesting analytics can go unused because few people know how to interpret the results.

When it happens, this failure is rarely due to the people. It’s usually due to a shortsighted view of analytics, a view that focuses on the underlying data processing at the expense of making the results understandable.

That’s the bad news. The good news is, computers can be used not just to “do” analytics but to explain them. For example, long ago at Personify, reports had a footer called “How to Read This Report.” It was a plain-English sentence that described the data using the top-left cell as an example. It was simple but effective.
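Personify’s actual footer logic isn’t described beyond that, but the idea is easy to sketch. The function below is a hypothetical illustration (the report structure and wording are assumptions, not Personify’s real format): given a report’s row label, column label, and top-left value, it generates the kind of plain-English sentence described above.

```python
# Hypothetical sketch of a "How to Read This Report" footer generator.
# It describes the report's top-left data cell in plain English.

def how_to_read(row_label, col_label, value, metric="sessions"):
    """Build a one-sentence explanation using the top-left cell as an example."""
    return (f"How to read this report: each cell shows the number of {metric}. "
            f"For example, the top-left cell means that \"{row_label}\" "
            f"had {value} {metric} during \"{col_label}\".")

# An illustrative report: rows are segments, columns are time periods.
report = {
    "rows": ["Returning visitors", "New visitors"],
    "cols": ["Week 1", "Week 2"],
    "data": [[1240, 1390], [880, 1010]],
}

footer = how_to_read(report["rows"][0], report["cols"][0], report["data"][0][0])
print(footer)
```

The point isn’t the template itself but that the explanation is generated from the same data as the report, so it always matches what the reader is looking at.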

The latest thing to remind me of this topic was one of the best display ads I’ve ever seen on the Internet. I was looking up Apple on a stock-quote site, and one of the ads was this:

The ad explains Apple’s current performance on a popular technical-analysis indicator for stocks. Traditionally, technical analyses are rendered as charts that require expertise to interpret. With this ad, Scottrade is demonstrating its SmartText technology that interprets the charts for you, using current data. (The reason I think it’s a great ad: Most ads promise something; this ad actually does it, in context.)

Would technical-analysis pros find the explanations simplistic? Probably. But for the casual user, do the explanations begin to make sense of something that might be useful? I’d say yes.

I don’t follow the field of technical analysis for stocks, so I don’t know how unique or effective SmartText is—although it’s apparently unique enough to warrant its own ad campaign. What I do know is this: In business analytics the equivalent of SmartText’s functionality is a rarity. Analytics results that people don’t understand are not so rare.

In other words: Analytics, you’ve got some explaining to do.

Thursday, September 6, 2007

How to Hold a HiPPO

At a recent conference, I saw analytics blogger Avinash Kaushik talk about the dangers of the HiPPO: the Highest Paid Person’s Opinion. From his speaker’s notes for a similar talk:

I can’t say it any better, HiPPO’s rule the world, they over rule your data, they impose their opinions on you and your company customers, they think they know best (sometimes they do), their mere presence in a meeting prevents ideas from coming up. The solution to this problem is to depersonalize decision making, simply don’t make it about you or what you think. Go outside, get context from other places. Include external or internal benchmarks in your analysis. Get competitive data (we are at x% of zz metric and our competition is at x+9% of zz metric).

Be incessantly focussed on your company customers and dragging their voice to the table (for example via experimentation and testing or via open ended survey questions). Very few people, HiPPO’s included, can argue with a customer’s voice, the customer afterall is the queen / king! : )

Although Avinash’s advice was about overcoming a HiPPO gone wild, I started wondering about the other side of the story: What if your opinion is the HiPPO in the room? How do you be a good HiPPO holder?

Before you say, “This doesn’t apply to me,” remember that the HiPPO holder is a relative position. In a meeting with your boss, you may be just one of the team; but if you in turn lead a team, or if you are influential with peers, you hold a HiPPO in some situations.

Realizing when you are the HiPPO holder is important because it will keep you on guard against the traps of being a bad HiPPO holder, the kind that stifles ideas just by being present.

What are the traps?

It’s not just about politics. You pride yourself on being non-hierarchical, open, and politics-free—and thus assume when people agree with you it’s because you have the best ideas. But even if you have minimized the politics, it’s still less work for your people to agree with you than to do the spadework of collecting and analyzing data. If they don’t have the time or incentive to dig for their own answers, they may not. (Promote the ones who do anyway.)

Metrics can have conflicts of interest. For example, company X has a call center where the key metric is call duration (lower being better for the company’s costs). Because it’s harder to measure, the company does not systematically track customer satisfaction with calls. As a result, “metrics-driven” decisions about the call center inadvertently favor churn-and-burn customer service practices. The point: Focusing on metrics doesn’t relieve you of understanding whether bias is still at play. Your company’s key metrics probably reflect the worldview of those in charge, so if you are looking for out-of-the-box thinking, ask yourself if your metrics already have you in a box.

Don’t mistake a lively debate for a good decision process. Open exchange of ideas is good. But in your team, do you really know whether the best ideas win, or whether the best debaters win? It’s an especially important question if you pride yourself as one of the best debaters.

Those are a few traps I’ve observed. I’m sure there are many more to avoid, but I’ll conclude by suggesting a principle for HiPPO holders to embrace: For big decisions, the HiPPO holder should focus on process, not outcomes. So if you own the final decision, be like a judge: Limit yourself to establishing and enforcing a fair process, then decide only at the end, based on the evidence.

This is consistent with Avinash’s advice to depersonalize the process, but with a twist. He was assuming that a team should depersonalize the process to overcome the HiPPO’s biases. If instead you use your HiPPO influence to incent and enforce an objective process, everybody is further ahead.

Sunday, August 5, 2007

Highway Exit Numbers: When Simple Was Too Simple

Originally, exit numbers were sequential. So if a highway had 200 exits, they would be numbered 1 to 200. From the motorist’s point of view, it could not have been simpler.

But when a sequentially numbered highway needed a new exit, the scheme’s simplicity went from asset to liability. For example, if a new exit was being added between exits 20 and 21, would the former exit 21 become 22? And would the former 22 become 23? And so on. Given the cost in dollars and confusion, renumbering all the exits from a new exit forward was not an option.

What to do? “Hmmmm, a new exit between 20 and 21....Let’s call it 21A!” And so it was, until another exit was needed between 21 and 21A. On the New York State Thruway, the solution was to go with the sequence 21, 21B, 21A, 22. (If you are asking yourself whether you read that correctly, you did.)

This kind of situation led to the rise of distance-based numbering, where the exit numbers are the same as the highway’s mile markers. Example: Exit 21 would be 21 miles from the highway’s numbering origin. This handles the problem of new exits better than the sequential scheme because a new exit will usually have an unused number waiting for it.

But what if a new exit is within the same mile marker as an existing exit? From the Wikipedia article:

If two exits would end up with the same number, the numbers are sometimes modified slightly; this is often impossible and exits are given sequential or directional suffixes, just as with sequential numbers.

So in a worst-case scenario, exits that are too close together get sequential numbers or letters tacked on to their distance-based numbers. This is still better than the original sequential-numbering scheme, because the ugliness involved with changing sequential numbers is contained to a few exits at a time.
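The collision-handling above is simple enough to mock up. The sketch below assigns distance-based numbers and falls back to letter suffixes when two exits land on the same mile. Real practice varies by jurisdiction (some truncate the milepost rather than round it), so treat the rounding rule here as an assumption for illustration.

```python
# Sketch of distance-based exit numbering: each exit takes its (rounded)
# milepost as its number; exits sharing a mile get A, B, C... suffixes
# in order of distance along the highway.

from collections import defaultdict
from string import ascii_uppercase

def number_exits(mileposts):
    """Map each exit's milepost to a distance-based exit number string."""
    by_mile = defaultdict(list)
    for mp in sorted(mileposts):
        by_mile[round(mp)].append(mp)

    numbers = {}
    for mile, group in by_mile.items():
        if len(group) == 1:
            numbers[group[0]] = str(mile)          # no collision: plain number
        else:
            for suffix, mp in zip(ascii_uppercase, group):
                numbers[mp] = f"{mile}{suffix}"    # collision: 21A, 21B, ...
    return numbers

print(number_exits([2.8, 21.2, 21.4, 35.0]))
```

Note the key property the post describes: adding a new exit at, say, milepost 28.5 just claims the unused number 28 or 29, leaving every other exit’s number untouched.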

However, the distance-based system does have an Achilles’ heel: If a highway’s origin changes, all the mile and exit-number signs would need to change. Although this situation would be as bad as or worse than a renumbering caused by a new exit in a sequential-numbering system, it is far less likely. That’s because, out of all the places a new exit can be, it needs to be at the beginning of the highway to disrupt the distance-based system. In contrast, a new exit any place other than the end of the highway would disrupt the sequential system.

So, distance-based numbering is not a perfect solution, but it is a better one. Sequential numbering’s simplicity assumed no change to the system being numbered. When that assumption was violated, sequential numbering proved too simple to adapt. As a result, most U.S. states now use distance-based numbering for interstate highways.

[The image is from Wikipedia’s “Exit number” article, as are other examples below except where noted.]

Saturday, July 21, 2007

Let It Roll, Baby, Roll

It is said that the ancient Central American Olmec culture invented the wheel for children’s toys but never made the jump to using it for transporting things. By today’s standards, such an important oversight seems difficult to imagine, but our modern society has its own versions of this story.

If you are a frequent traveler, you almost certainly have luggage with wheels. Compared to carrying your luggage, rolling it is a big advance. When did this advance occur?

From eBags’ Bagopedia:

Although the wheel dates back to pre-history, modern rolling luggage did not appear on the scene until around 1989. The story goes that Northwest airline pilot Bob Plath was tired of lugging his heavy overnight bag and flight bag through airports around the world. Being a creative kind of guy, Plath spent weekends working on a wheeled “pilot” bag in his garage. The new wheeled bag was an immediate success. Whenever and wherever Plath’s wheeled luggage rolled on the scene, everyone wanted one. Bob Plath’s company TravelPro was born and the rest is history. Before too long, TravelPro held 15 patents on a diverse line of rolling luggage. Other luggage companies quickly caught on and went wheeled.

“Alright,” you say, “our age of innovation didn’t notice this one obvious application of the wheel for a while, but surely it is an anomaly.”

No. I can testify to the next example, because our family is apparently an early adopter of the wheeled car seat: It’s like rollable luggage, except the “luggage” is your car seat, which attaches to a roller frame, like so:

The picture is the gogo Kidz Travelmate from GogoBabyz. I don’t know who the child is.

The scenario where a wheeled car seat applies is this: You are flying somewhere with a small child, and you will be driving at your destination. Normally, you would carry a car seat as a piece of luggage (or as carry-on if your child will be sitting in it on the plane). When added to the multitude of things you need to carry in support of a small child, a bulky car seat is not a welcome addition.

By adding the roller wheels to your car seat, you not only get the roller effect on the car seat, you can also roll your child in the seat. We have rolled our daughter through numerous airports, and similar to the story above about people stopping the roller bag inventor wherever he went, people always ask where we got it or comment on how clever an idea it is.

So, even today, a technology as fundamental as the wheel is still spinning out new uses.

Tuesday, July 10, 2007

The Pleasant Mystery of the Perfect Cut

Last time, I talked about The Conversations: Walter Murch and the Art of Editing Film by Michael Ondaatje. Following is a final topic from the book that resonated with my background in electronic-music composition and audio engineering. (I studied those subjects in college. They ended up being a path not taken in my life, although still areas of interest.)

Back in the day, if I was deciding how to bring musical elements together, I found the best results always had a mystery to them. For some reason, things just clicked—neither by accident (it takes a lot of technique to create the conditions for things to click) nor by a fully analyzable formula.

On this subject, Murch drills the bullseye straight through:

To determine [where to make a cut in a scene], I look at the shot intently. It’s running along, and then at a certain point I flinch—it’s almost an involuntary flinch, an equivalent of a blink. That flinch point is where the shot will end....

The key, on an operational level, is that I have to be able to duplicate that flinch point, exactly, at least two times in a row. So I run the shot once and hit a mark. Then run it back, look at it, and flinch again. Now I’m able to compare. Where did I stop the first time, and where did I stop the second? If I hit exactly the same frame both times, that’s proof to me that there is something organically true about that moment. It’s absolutely impossible to do that by a conscious decision. Imagine—there are twenty-four targets going by every second and with your gun you have to hit [exactly the same one].

Why that works is one of the pleasant mysteries in life.

Saturday, June 30, 2007

Walter Murch and the Long View of Film

Walter Murch was the film editor and/or sound mixer for American Graffiti, Apocalypse Now, Ghost, The Godfather, and Cold Mountain, to name a few movies you might know. I know him from a book, The Conversations: Walter Murch and the Art of Editing Film by Michael Ondaatje (author of The English Patient, the film version of which Murch edited).

The book is a series of conversations between Ondaatje and Murch about filmmaking: the techniques, stories, and people behind the scenes. Along the way, we get two Renaissance Men’s worth of eclectic digressions and connections, woven together by Ondaatje in his role as editor of the text.

To give you a taste, I’ll excerpt from two passages that interested me because of Murch’s “long view” perspective on film’s development as an art form.

We look at ancient Egyptian painting today and may find it slightly comic, but what the Egyptians were trying to do with the figure was reveal the various aspects of the person’s body in the most characteristic aspect. The face is in profile because that reveals the most about the person’s face, but the shoulders are not in profile, they’re facing the viewer, because that’s the most revealing angle for the shoulders. The hips are not in profile, but the feet are. It gives a strange, twisted effect, but it was natural for the Egyptians. They were painting essences, and in order to paint an essence you have to paint it from its most characteristic angle. So they would simply combine the various characteristic essences of the human body....

That’s exactly what we do in film, except that instead of the body of the person, it’s the work itself. The director chooses the most characteristic, revealing, interesting angle for every situation and every line of dialogue and every scene....It may be, five hundred years from now, when people see films from our era, they’ll seem “Egyptian” in a strange way. Here we are, cutting between different angles to achieve the most interesting, characteristic, revealing lens and camera angle for every situation. That may appear perfectly normal to us, but people 500 years from now may find it strange or comic.

If that sounds unlikely, think of an eventual future where “film” = holodeck.

On to the second passage:

I think cinema is perhaps now where music was before musical notation—writing music as a sequence of marks on paper—was invented. Music had been a crucial part of human culture for thousands of years, but there had been no way to write it down. Its perpetuation depended on an oral culture, the way literature’s did in Homeric days. But when modern musical notation was invented, in the eleventh century, it opened up the underlying mathematics of music, and made that mathematics emotionally accessible. You could easily manipulate the musical structure on parchment and it would produce startlingly sophisticated emotional effects when it was played. And this in turn opened up the concept of polyphony—multiple musical lines playing at the same time. Then, with the general acceptance of the mathematically determined even-tempered scale in the mid-eighteenth century, music really took off. Complex and emotional changes of key became possible across the tonal spectrum. And that unleashed all the music of the late eighteenth and the nineteenth centuries: Mozart, Beethoven, Mendelssohn, Berlioz, Brahms, Mahler!

I like to think cinema is stumbling around in the “pre-notation” phase of its history. We’re still doing it all by the seat of our pants. Not that we haven’t made wonderful things. But if you compare music in the twelfth century with music in the eighteenth century, you can clearly sense a difference of several orders of magnitude in technical and emotional development, and this was all made possible by the ability to write music on paper. Whether we will ever be able to write anything like cinematic notation, I don’t know. But it’s interesting to think about.

While these excerpts typify Murch’s erudition, they are more abstract than most of the book, which often is about how specific scenes in movies achieved their effect: how a distant, quiet sound ended up being more powerful than a layered mass of loud sounds in George Lucas’ first feature film, THX 1138; how the framing of a scene in The Godfather tells the audience the character is lying; how a specific technique for recording and mixing crickets led to a “hyperreal” soundscape in Apocalypse Now; and so on.

If you read the book, you will not only know more about what makes movies tick, you’ll also feel like you know Walter Murch. And that’s a good thing.

Thursday, June 21, 2007

Bring on the New Magic

Fill in this blank: A computer desktop is to a real desktop like a Google search is to ________?

It’s hard because the computer desktop metaphor was meant to be literally like a real-world desktop, with files arranged into folders and such. By contrast, we have nothing in the real world like the modern Internet search experience, with its single-line interface that returns answers to seemingly everything.

Why does this distinction matter? Think of how the desktop metaphor and graphical user interface changed computing, and now consider that it’s happening again. Increasingly, what people do with computers is straining and, in some respects, bypassing the desktop metaphor. For example, when you start your computer, how often do you open folders and files versus going straight to a Web browser? And in that Web browser, how often do you immediately do a search?

I bring this up because Tim Oren recently raised these issues, referencing Randy Smith’s idea of the tension between “literalism” and “magic” in user interfaces:

The original desktop design leaned in the direction of literalism. While the allusion to reality was never pure (trashcans on desktops?) the generally one-to-one correspondence between user action and resulting change inside the system put it squarely into the direct manipulation class of literalist designs. This literalism is also a large cause of the failure of the desktop design to scale, as the user is responsible for acting to create and maintain useful organization of ever-growing collections of information. Contrast the wildly successful - and almost completely ‘magical’ - interface of Google. There is no real world act equivalent to typing a few words and receiving in return lists of information from any place in the global Web....The overwhelming acceptance by end users of an interface devoid of literalist elements is a quiet and widely overlooked revolution of the last decade, and its implications are largely unexplored.

I agree, and it will only accelerate as the distinction between what’s on your computer versus what’s on the Internet becomes less meaningful. Much of what used to reside on my computer’s hard disk now resides in the Internet “cloud” on various services. And for the stuff that stays on my computer, desktop search (ironically, often accessed via a Web browser toolbar) is increasingly an alternative to navigating through folders and files.

Yet the traditional Google search only solves a certain class of problems, especially those where you know what you’re looking for. What other “magical” approaches are there to the things Google doesn’t do? It’s like asking, circa 1985, what else can you do with a graphical user interface other than help people run an operating system?

Finally, this topic is not just about computers as traditionally defined. For example, the literalist approach to a TV user interface—the scrolling-grid “electronic program guide”—gets evermore useless as choice expands. If you think it’s bad scrolling through 500 channels of cable offerings, how about “all video on the Internet”?

So, as Tim says, the literalism/magic angle is an interesting one for considering a change that’s afoot—in what we do, and how we do it—with things digital. Bring on the new magic!

Tuesday, June 12, 2007

Yield Management for Metered Street Parking

What if parking meters were priced more like airline seats?

The backstory: “Yield management” is what most airlines do when they sell you a seat. The price you pay might be different from what it was yesterday, or will be an hour from now. It depends primarily on the current and expected demand for the seat (usually, the seating class) you want.

By increasing or decreasing prices with demand, the airline can maximize the revenue from a flight’s inventory of seats. The goal is to avoid empty seats that generate no revenue while getting the highest rate possible on filled seats. Because many different, constantly changing factors are involved, managing the yield is a complex task.

With that intro, let’s transition from airline seats to metered street parking. Like airline seats, metered parking spaces are perishable goods: If they go unused, the potential revenue is lost. Also, as a flight has a limited number of seats, a geographic area has a limited number of metered parking spaces.

Cities raise revenue from parking meters and thus have incentive to manage the yield upward. However, the usual rule is one price fits all. Some cities have different prices in different areas, but that’s a long way from active yield management.

A major obstacle has been the traditional parking meter. The closest it comes to measuring demand is a coin count when the meter is emptied. But even if it could continuously measure demand (“Hey, my space has been empty 35 minutes!”), the traditional meter does not have a way to adjust its pricing automatically.

Enter new technologies. Today, digital meters exist that can change pricing depending on the day and time. Also, a variety of technologies exist to detect when a car enters and leaves a parking space; as a result, demand is measurable not just by meter but by day of the week, time of day, and so on.
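Once demand is measurable per meter, a simple yield-management rule becomes possible. The sketch below shows one common form of such a rule: nudge the hourly rate toward a target occupancy. The target, step size, and bounds are illustrative assumptions on my part, not any city’s actual policy.

```python
# Sketch of an occupancy-targeting price rule for a metered block:
# raise the hourly rate when the block is fuller than the target,
# lower it when the block is emptier. All parameters are illustrative.

def adjust_rate(current_rate, occupancy, target=0.85, step=0.25,
                floor=0.50, ceiling=6.00):
    """Return next period's hourly rate, stepped toward target occupancy."""
    if occupancy > target:
        new_rate = current_rate + step   # too full: price up
    elif occupancy < target:
        new_rate = current_rate - step   # too empty: price down
    else:
        new_rate = current_rate
    return min(max(new_rate, floor), ceiling)

print(adjust_rate(3.00, 0.95))  # busy block: rate rises to 3.25
print(adjust_rate(3.00, 0.40))  # quiet block: rate falls to 2.75
```

Run per block and per time-of-day bucket, a rule like this directly addresses the study’s first two findings: demand that varies within the day and from block to block gets a price that varies the same way.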

Using such new technologies, the Port of San Francisco recently ran a test to understand how its meters were being used. The test involved multiple vendors of next-generation parking meters, with measurement by Streetline Networks, a San Francisco start-up. Streetline has a wireless sensor system that tracks when cars come and go from spaces.

Here is an example of data collected by Streetline at a particular meter:

The graphic shows December 2006 metered hours (in gray), occupied hours (in blue) and paid hours (in red) for meters along the even side of 200 Embarcadero, broken out by day of the week.

Among the findings of the study:

  • Demand varied significantly at the same meter during the day, often predictably so. (In the graphic above, note how uneven demand is within each day yet similar across weekdays.)
  • Demand could vary widely on a block-to-block basis.
  • Higher pricing did not affect usage. Meters priced at $3 an hour were used at the same rate as when they were priced at $2 an hour.

The test’s one attempt at varying prices involved progressive pricing. Meters were $3 for the first two hours, $4 for the third hour, and $5 for the fourth hour. The idea was, instead of having parking cops enforce a two-hour limit, let the pricing system enforce it by making people pay more the longer they stay. (If you’re trying to imagine who would pump $15 worth of quarters into a meter, you’ll be relieved to know the test allowed payment by credit card.)
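For concreteness, the test’s progressive schedule reduces to a few lines of code; the “$15 worth of quarters” above is just the sum of the four hourly rates.

```python
# The progressive schedule from the Port's test, as described:
# $3/hour for the first two hours, $4 for the third, $5 for the fourth.

def progressive_total(hours):
    """Total charge (in dollars) for a stay of the given whole hours."""
    hourly_rates = [3, 3, 4, 5]
    return sum(hourly_rates[:hours])

print(progressive_total(4))  # 3 + 3 + 4 + 5 = 15
```

Note that the marginal price depends only on how long *you* stay, not on how busy the block is, which is exactly the flaw the Port’s minutes describe next.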

However, progressive pricing was a poor tool for managing yield at peak times, such as at lunch. Quoting from the minutes of a Port Commission meeting where the findings were presented:

What makes it a peak is that most people arrive just before it and most people leave just after it. What you end up having with a progressive rate system is that most people pay the lower rate during the highest usage hour. People [staying] a little longer ended up paying higher rates when demand is lower. This is the opposite of what you want to see if you’re trying to balance usage over the day.

Going forward, the Port of San Francisco will be trying other pricing policies:

Block-by-block, there’s a huge variation in demand for on-street parking which means that we need to have pricing policies that are more specific to a specific block or a geographic areas as opposed to Portwide pricing....

When people are parking in the middle of the day between 11a.m.-2p.m and the pricing is just for two hours and the third hour and the fourth hour. We learned that if we want to deal with congestion we have to deal with time-of-day pricing as opposed to straight per hour rate.

In other words, they need to price parking meters more like airline seats. Other thoughts: Vary the prices of the meters near the ballpark around game time. Allow longer time limits in areas/times with low demand.

These pricing schemes are not as dynamic as airline-seat pricing, but they go a significant distance in that direction. Because parking meters are not reserved ahead of time, and checking the price requires some form of stopping, there are practical limits to how dynamic the pricing can be, notwithstanding future visions of parking meters auctioning their spaces wirelessly to cars cruising the area.

You may ask, is this all a good thing? No one likes paying more for parking, and unlike with airlines and airline seats, a city has a monopoly on metered parking spaces. What is in the public interest?

I don’t have a definitive answer, but from time to time I’ve noticed the work of Donald Shoup, a professor at UCLA who specializes in public policy related to parking. Having studied the subject for decades, he says metered parking is usually underpriced, and for that matter, there’s too much free street parking. In this New York Times opinion piece, he makes his case.

Independent of the public-policy debate, it’s safe to say that elements of yield management will increasingly apply to metered parking, simply because it’s now feasible and thus will be tried. Per the old adage “If you can’t measure it, you can’t manage it,” I expect cities to find that if they can actively measure street-parking demand, they can manage pricing better for a wide range of public-policy goals.

Monday, May 28, 2007

Issues, Character, and Tuchman’s The March of Folly

With presidential elections, it is often said that character trumps issues. For example, from the 3/11/2007 USA Today:

A new Associated Press-Ipsos poll says 55% of those surveyed consider honesty, integrity and other values of character the most important qualities they look for in a presidential candidate.

Just one-third look first to candidates’ stances on issues; even fewer focus foremost on leadership traits, experience or intelligence.

Whenever I see a “character trumps issues” story, I feel like We the People are taking the easy way out. Compared to understanding the issues, deciding which candidate seems more honest is easy—not necessarily easy to get right, but easy in terms of effort required.

I’ve always assumed that was a bad thing. But having read The March of Folly by historian Barbara Tuchman, I may reconsider. Written in 1984, The March of Folly examines why various rulers and governments throughout history have pursued policies that were obviously counterproductive, not just to historians but to observers in their time.

The short answer is: leaders with bad character—specifically, corruption, misguided ambition driven by ego, and obliviousness to reality when the facts challenged an existing course of action. Near the end of the book, Tuchman says, “Aware of the controlling power of ambition, corruption and emotion, it may be that in the search for wiser government we should look for the test of character first.”

I don’t take this to mean, “ignore the issues.” However, in a modern democracy, most mainstream positions on issues average out to something workable over time, as one side wins for a while and then the other. Most important to Tuchman is avoiding the occasional but disastrous policy or institution that goes uncorrected despite widely understood flaws and viable alternatives. In her view, the true disasters have historically been, and will continue to be, driven by leadership failures involving character. So when character differences between candidates are significant, she might literally want the best man or woman to win, as opposed to the candidate she agrees with most.

That said, it’s worth noting that Tuchman’s key character flaws involve power’s corrupting and delusional influences. She is less concerned about personal vices or, for that matter, virtues such as heroism demonstrated in war. On this point she probably differs with some percentage of the poll respondents. But even if she gets there for different reasons, Tuchman’s version of “character first” is an interesting way to think about—and perhaps feel better about—what the majority of voters apparently do.

Monday, May 14, 2007

Outstanding in the Field

You may ask, “Why is there a long table and chairs set up in the middle of that field?” The answer is both a story about innovation and an unusual restaurant recommendation.

Jim Denevan was the chef of Gabriella Cafe in Santa Cruz, California. He invited farmers to collaborate with him on special meals at the restaurant, featuring fresh-picked produce straight off the farm. These events were hits.

By Northern California standards, doing restaurant meals with featured farmers was mildly innovative. But then Denevan went a major step further: Having taken farmers to the restaurant, Denevan decided to take the restaurant to the farm.

He created Outstanding in the Field, an event that occurs throughout the summer and fall at various organic farms in the United States and Canada. Guests tour an organic farm, with the visit culminating in dinner amid the fields. The food is prepared by a notable chef, using ingredients straight off the farm.

Jacqueline and I attended an Outstanding in the Field event a few years ago at Knoll Farms in Brentwood, California (about 60 miles east of San Francisco). It was fig season, and farmer Rick Knoll took us and a group of perhaps 50 others on a tour of the grounds, picking and eating ripe figs off the trees.

Later, we had dinner on a long table like the one in the picture, albeit shielded from the warm evening wind by parallel rows of fig trees on either side. The chef was from San Francisco’s Fringale. It was a five-course meal, with different wine tastings at each course.

Among other things, the meal included the most explosively flavorful tomatoes I have ever experienced. It was the taste equivalent of super-saturated color.

When the sun was gone, a chain of paper lanterns was the only light source amid the otherwise dark fields, the sky dense with stars.

Just being near where food originates (no, not the grocery store), even if only for a short while, dining in/with/amid nature—it was a good thing.

Take a look at this year’s Outstanding in the Field schedule. Perhaps there will be an event near you. But beware, it is not cheap: $150 per person, maybe more.

The apt comparison would be a night out at a very fancy restaurant, where the purpose is to do something special. Outstanding in the Field is less fancy but more special.

Wednesday, May 9, 2007

The New York Times, Nielsen, and Margin of Error

The April 8, 2007, New York Times had an extraordinary self-indictment of numbers abuse:

Every Monday, a Times ranking of the top 10 prime time broadcast television programs uses a Nielsen rating that indicates how many households watched each show the previous week. On March 26, “60 Minutes” ranked No. 8 with a 9.2 Nielsen rating. (Each rating point represents 1.1 million homes.) With a margin of error of 0.3-rating point...there was no statistically significant difference between the rating of “60 Minutes” and any of the three programs above it in the ranking, or either of the two below it. With no mention of the margin of error, however, Times readers were left to believe the rankings really meant something.
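The statistical point in the quote can be sketched in a few lines. This is a rough rule of thumb, not Nielsen’s actual methodology: treat each rating as an interval of plus-or-minus the margin of error, and call two ratings distinguishable only when those intervals don’t overlap. The 9.2 rating and 0.3-point margin come from the article; the comparison ratings are hypothetical.

```python
def distinguishable(rating_a, rating_b, margin=0.3):
    """Roughly: two ratings are distinguishable only when their error
    intervals [r - margin, r + margin] do not overlap, i.e. the gap
    between them exceeds the two margins combined."""
    return abs(rating_a - rating_b) > 2 * margin

# "60 Minutes" at 9.2 vs. a hypothetical show rated 9.5 just above it:
print(distinguishable(9.2, 9.5))   # a 0.3 gap is within the noise
# vs. a hypothetical show rated 10.0:
print(distinguishable(9.2, 10.0))  # a 0.8 gap exceeds the combined 0.6
```

By this rule, a weekly top-10 list where neighbors sit a few tenths of a point apart is largely a ranking of noise.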

Turns out omitting the margin of error is not new:

Over the past 25 years, only two of the 3,124 archived articles that mentioned Nielsen and “ratings” included a reference to the margin of error.

The piece was by Byron Calame, until recently The Times’ Public Editor. As “readers’ representative,” Calame independently investigated reader questions and complaints. In this case, he contacted Nielsen and questioned Times editors responsible for running the numbers.

The Nielsen spokesperson said the numbers were “estimates,” “should not be construed literally,” and lacked margin of error data due to resource constraints on Nielsen’s side.

Is that a problem? Paraphrasing a Times editor’s response: No one else shows margins of error, so what’s the problem?

Calame asked another editor why The Times did not at least tell readers that Nielsen does not provide the margin of error. The explanation is telling: “If we run a large disclaimer saying, in effect, this company is withholding a critical piece of information, I imagine many readers would simply turn the page.”

Okay, thanks for clarifying the priorities.

Calame’s piece called on The Times to do better, and if nothing else, The Times deserves credit for encouraging this criticism from within.

Monday, April 30, 2007

Is Predicting Hit Songs Futile?

I recently covered Columbia Professor Duncan Watts’ “cumulative advantage” experiment, in which similar groups of people started with the same selection of songs but ended up with different choices for which songs were hits. If people were just judging the songs on content, the groups’ choices for hits should have been similar. However, there was also a social factor: Except for a control group, each group’s members could see the popularity of songs within their group but not within other groups.

Professor Watts proposed that the divergent choices for hits were due to each group’s piling-on to whatever happened to be initially popular within that group. See the original post for details.

In praising the experiment, I held back on some questions about the strongest claim in Professor Watts’ New York Times Magazine article. In essence, he claimed that predicting hits was futile due to the inherent randomness of social systems like the word-of-mouth that affects entertainment choices:

Because the long-run success of a song depends so sensitively on the decisions of a few early-arriving individuals, whose choices are subsequently amplified and eventually locked in by the cumulative-advantage process, and because the particular individuals who play this important role are chosen randomly and may make different decisions from one moment to the next, the resulting unpredictability is inherent to the nature of the market.

This effect was true of Professor Watts’ experiment, but is it realistic to have the early-arriving individuals “chosen randomly”? Isn’t there a relatively small percentage of people who act as tastemakers: people who are into new stuff first and whose knowledgeable opinions influence others? If these people have non-random qualities, shouldn’t there be a lot more predictability?

The Limits of an Individual Influential

After some email back-and-forth with Professor Watts, I was surprised to find that the role of “influentials” is potentially a lot less than is commonly believed. In a draft of a paper due for publication later this year, Watts and collaborator Peter Dodds detailed their mathematical simulations of various scenarios involving influentials. The results were summarized in the Harvard Business Review’s Breakthrough Ideas for 2007:

Our work shows that the principal requirement for what we call “global cascades”—the widespread propagation of influence through networks—is the presence not of a few influentials but, rather, of a critical mass of easily influenced people, each of whom adopts, say, a look or a brand after being exposed to a single adopting neighbor. Regardless of how influential an individual is locally, he or she can exert global influence only if this critical mass is available to propagate a chain reaction.

To be fair, we found that in certain circumstances, highly influential people have a significantly greater chance of triggering a critical mass—and hence a global cascade—than ordinary people. Mostly, however, cascade size and frequency depend on the availability and connectedness of easily influenced people, not on the characteristics of the initiators—just as the size of a forest fire often has little to do with the spark that started it and lots to do with the state of the forest.

The researchers’ forthcoming paper makes a compelling case for these conclusions, exploring influentials’ role under many different scenarios. However, its various social-network models all start with the single “spark” of an individual discovering and communicating something. It does not consider a scenario where a large number of simultaneous and non-random sparks occur throughout the network. That is, if a single, random spark can cause a forest fire under the right conditions, how about a bunch of sparks purposely set at once, across that same forest?

The Potential of Coordinated Influence

The coordinated, multi-spark scenario matters because it is how certain social-marketing companies supposedly work: unleashing a small army of “on message” people to tell their friends about some great new thing. One might argue that a favorable newspaper review, radio airplay, or other one-to-many media do something similar, “sparking” many consumers at once.

The key point: Instead of having a single line of sentiment that needs to propagate enough times to reach critical mass, the multi-spark scenario has many lines propagating, each of which could randomly run into other lines, thereby accelerating toward a critical mass.
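The single-spark versus multi-spark difference can be sketched with a toy threshold-cascade model in the spirit of the Watts–Dodds simulations. Everything below—graph size, average degree, adoption threshold, seed counts—is a hypothetical choice for illustration, not their actual model or parameters.

```python
import random

def cascade_size(n=1000, avg_degree=6, threshold=0.3, num_seeds=1, seed=0):
    """Fraction of nodes that ultimately adopt in a simple threshold model:
    a node adopts once at least `threshold` of its neighbors have adopted."""
    rng = random.Random(seed)
    # Build a sparse random graph (Erdos-Renyi-style edge sampling).
    neighbors = [set() for _ in range(n)]
    for _ in range(n * avg_degree // 2):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            neighbors[a].add(b)
            neighbors[b].add(a)
    adopted = set(rng.sample(range(n), num_seeds))  # the initial "sparks"
    changed = True
    while changed:  # spread until no node changes its mind
        changed = False
        for node in range(n):
            if node in adopted or not neighbors[node]:
                continue
            frac = sum(nb in adopted for nb in neighbors[node]) / len(neighbors[node])
            if frac >= threshold:
                adopted.add(node)
                changed = True
    return len(adopted) / n

single = cascade_size(num_seeds=1)
multi = cascade_size(num_seeds=50)
print(f"one spark: {single:.1%} adopt; fifty sparks: {multi:.1%} adopt")
```

With parameters like these, a lone spark usually fizzles because few nodes have enough adopting neighbors to cross their threshold, while many simultaneous sparks give far more nodes a reachable threshold—the “lines running into other lines” effect described above.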

Bringing this all back to the original New York Times Magazine article and its assertion that hits cannot be predicted, the multi-spark scenario is a way for hits to be predicted. In essence, it increases the prediction reliability by manipulating the system.

You may say this is unfair, like loading the dice, but it’s how entertainment marketing works. Companies spend marketing dollars in proportion to what they think will be popular, thereby making what they think will be popular more popular. Economically, the question is whether the cost of manipulating the word-of-mouth system is worth the increased probability of a hit.

Note that predicting a hit doesn’t mean being right all the time; it just means that across many attempts the gain is greater than the cost. Thus, even if you only went from a 3% hit rate to a 5% hit rate, predicting was worthwhile if it cost less than the benefit from those extra two percentage points.
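That arithmetic can be made concrete. All figures below—hit rates, the payoff of a hit, the cost of a marketing push—are hypothetical:

```python
def expected_gain(base_rate, boosted_rate, payoff_per_hit, cost_of_boosting):
    """Expected profit per attempt from boosting the hit rate:
    the extra probability of a hit times its payoff, minus the boost's cost."""
    return (boosted_rate - base_rate) * payoff_per_hit - cost_of_boosting

# Going from a 3% to a 5% hit rate, where a hit pays $10M, breaks even
# at a $200K push; a $150K push nets an expected $50K per attempt.
print(round(expected_gain(0.03, 0.05, 10_000_000, 150_000), 2))
# ...while a $250K push loses money in expectation:
print(round(expected_gain(0.03, 0.05, 10_000_000, 250_000), 2))
```

The point is that the expected value is positive across many attempts even though any single attempt usually fails.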

So, I’m not ready to conclude that it’s futile for entertainment companies to predict hits. If the companies were merely acting as pure observers, then Professor Watts’ case would be strong enough for me. However, because entertainment companies’ predictions are often entangled with manipulating the systems being predicted, there may still be reason to try to pick winners.

Whether the benefits outweigh the costs...well, that’s another experiment to do.

Wednesday, April 25, 2007

Do You Like What You Like Because You Like What I Like?

An experiment:

  • Have a large group of people rate songs they’ve never heard before. Each person listens and rates privately so no one knows what others have done. If a person likes a song, he or she can download it. Call this group the “independent group.”
  • Now have another large group of people do the same thing with the same songs, except members of this group can see how popular the songs are with others. Call it the “social-influence” group.
  • Split the social-influence group into eight subgroups (“worlds”). Every world has the same songs, but a song’s popularity is counted only within that world. Thus, the social-influence group is split into eight parallel popularity contests.

The 4/15/2007 New York Times Magazine had a piece by Duncan Watts, professor of sociology at Columbia University, about this experiment. It was conducted via the Web with more than 14,000 participants. Professor Watts’ summary of the expectations and results follows.

First, if people know what they like regardless of what they think other people like, the most successful songs should draw about the same amount of the total market share in both the independent and social-influence conditions—that is, hits shouldn’t be any bigger just because the people downloading them know what other people downloaded. And second, the very same songs—the “best” ones—should become hits in all [eight] social-influence worlds.

What we found, however, was exactly the opposite. In all the social-influence worlds, the most popular songs were much more popular (and the least popular songs were less popular) than in the independent condition. At the same time, however, the particular songs that became hits were different in different worlds....

So does a listener’s own independent reaction to a song count for anything? In fact, intrinsic “quality,” which we measured in terms of a song’s popularity in the independent condition, did help to explain success in the social-influence condition. When we added up downloads across all eight social-influence worlds, “good” songs had higher market share, on average, than “bad” ones. But the impact of a listener’s own reactions is easily overwhelmed by his or her reactions to others. The song “Lockdown,” by 52metro, for example, ranked 26th out of 48 in quality; yet it was the No. 1 song in one social-influence world, and 40th in another. Overall, a song in the Top 5 in terms of quality had only a 50 percent chance of finishing in the Top 5 of success.

And why did this happen?

[W]hen people tend to like what other people like, differences in popularity are subject to what is called “cumulative advantage,” or the “rich get richer” effect. This means that if one object happens to be slightly more popular than another at just the right point, it will tend to become more popular still. As a result, even tiny, random fluctuations can blow up, generating potentially enormous long-run differences among even indistinguishable competitors—a phenomenon that is similar in some ways to the famous “butterfly effect” from chaos theory. Thus, if history were to be somehow rerun many times, seemingly identical universes with the same set of competitors and the same overall market tastes would quickly generate different winners: Madonna would have been popular in this world, but in some other version of history, she would be a nobody, and someone we have never heard of would be in her place.

I’ve quoted at length because I think it’s an ingenious and compelling experiment, well explained by Professor Watts. Although we all intuitively know the bandwagon effect, this experiment quantifies its importance in judging unfamiliar music. In this context, the results suggest we—the notorious average “we”—are quick to let what’s popular tell us what’s good.
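A minimal simulation shows how cumulative advantage can crown different winners in identical worlds. This is a toy “rich get richer” model, not a reconstruction of the actual experiment; only the 48-song count comes from the article, and the songs here have identical intrinsic quality, so any divergence between worlds is pure luck.

```python
import random

def run_world(num_songs=48, num_listeners=1000, seed=0):
    """Return the winning song in one simulated world where each listener
    downloads a song with probability proportional to its current
    download count (cumulative advantage)."""
    downloads = [1] * num_songs  # start every song with one nominal download
    rng = random.Random(seed)
    for _ in range(num_listeners):
        pick = rng.choices(range(num_songs), weights=downloads)[0]
        downloads[pick] += 1
    return max(range(num_songs), key=lambda s: downloads[s])

# Eight "worlds" with the same songs but independent listening histories:
winners = {run_world(seed=s) for s in range(8)}
print(f"winning songs across 8 identical worlds: {sorted(winners)}")
```

Since every song is identical here, each world’s winner is determined entirely by which songs happened to pull ahead early—the “tiny, random fluctuations” Watts describes.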

Saturday, April 14, 2007

The Joshua Bell Experiment

As “a test of whether, in an incongruous context, ordinary people would recognize genius,” The Washington Post deployed violin virtuoso Joshua Bell as an anonymous street musician in a Washington, DC, commuter plaza. At his feet, open for donations, was the case of his $3.5 million Stradivarius.

In 43 minutes, Bell played six classical masterpieces. Of the 1,097 people who passed by, most showed little to no reaction. Seven stopped for at least a minute. Twenty-seven donated a total of $32, not counting the twenty-dollar bill Bell got from the single person who recognized him.

A fine piece of writing, The Post article describes the experiment and ruminates at length about what it might mean. I can’t improve on that, but I’ll add some commentary about a few of the numbers.

First, and this isn’t pretty, Bell’s response rate of around 2.5% is similar to response rates for direct-mail solicitations of credit cards, loan refinancings, and such.

Second, the article tells us that Bell plays concerts where the cheap seats go for $100. It’s easy to read that detail as an implied value for his commuter-plaza performance, as if the 97.5% of people who ignored him might as well have been walking past a hundred-dollar bill on the ground.

This presumes that because some people would pay $100 to see Joshua Bell, then that’s the value. It’s not. It’s the value to the people who paid $100, not the average passer-by on the street. Based on the experiment, the value of seeing Joshua Bell to the average passer-by was roughly three cents. (Don’t believe me? Divide the $32 Bell made by the 1,097 people who passed by.)

Of course, the people paying $100 are doing so for a formal performance, at a concert hall, with an admission fee, at a convenient time, knowing that Joshua Bell is the player. All of that is missing from the experiment. So how surprised should we be that most people ignored him?

Quibbles aside, the article is still a good, thought-provoking read. It’s gotten a lot of play in the blogosphere, suggesting that if the public can’t recognize anonymized genius, it can at least recognize interesting commentary about the public’s inability to recognize anonymous genius.

Sunday, April 1, 2007

Trojan Goldfish

We interrupt this blog for a special investigatory report.

Time is running out. To what, we don’t know. But a 45-year trail of clues is telling us something.

1962: Inspired by a fish-shaped cheese cracker she saw in Switzerland, Pepperidge Farm founder Margaret Rudkin “returns with the recipe” and introduces Goldfish snack crackers in the United States.

Unanswered in the historical record is where this “recipe” originally came from. Visual observation indicates that a Goldfish cracker is a three-way genetic crossing of a Cheez-It, an oyster cracker, and a goldfish. But is that really all?

This question matters because over time, Goldfish crackers have evolved—as if by some mysterious genetic code—from their original ecological niche as a cocktail cracker to the snack cracker of choice for small children. Along the way, Goldfish have spawned multiple variants that display emotions, personality characteristics, and the latent capability to influence a generation.

1973: Francis Crick, co-discoverer of the structure of DNA, and Leslie Orgel propose the theory of directed panspermia, suggesting that the seeds of life may have been purposely spread by an advanced extraterrestrial civilization.

Although the typical interpretation of directed panspermia is about the origin of life on Earth, what if a group of Swiss scientists in 1959 came across recently arrived seeds of life, courtesy of comet debris still frozen after impact in the Alps? And what if careful analysis revealed that the ideal host for this new type of life was a baked-goods consumer product?

1987: Speaking to the United Nations General Assembly, U.S. President Ronald Reagan says: “I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world. And yet, I ask you, is not an alien force already among us?”

One year later, Goldfish crackers go into space aboard the Space Shuttle Discovery.

1997: Goldfish crackers appear with a smile stamped on them, the first change since their introduction. They become “the snack that smiles back.”

With this, Goldfish become more than passive objects of consumption. Goldfish become friends to their little consumers. Ingratiating their way into relationships by simply smiling back, are Goldfish setting the stage for something more than smiles?

1998-2004: Pepperidge Farm introduces Goldfish product variants such as Flavor Blasted Goldfish, Goldfish Colors, Giant Goldfish, Baby Goldfish, Goldfish Sandwich Snackers, and Goldfish Crisps.

Consistent with the evolutionary theory of punctuated equilibrium, Goldfish speciation occurs in an explosive six-year period following a 35-year period of stasis. The new variants replicate the primitive emotional apparatus of “smiley,” albeit for different market segments. 

2005: Pepperidge Farm announces that “Americans will be smiling even more as they get to know [Goldfish] in a whole new way as the fun-shaped snack comes to life in three dimensions.”

Embodied in the animated character Finn, Goldfish now have a figurehead to actively influence young minds. More than a year of market research shaped Finn to leverage the already “significant emotional connection with the brand” that Goldfish had attained.

2005: According to Pepperidge Farm, nearly half of U.S. households with children under 18 purchase Goldfish snack crackers annually.

While perhaps true, this statistic masks the well-known fact that 100% of children under age five eat Goldfish crackers on a near-continuous basis. The few parents that have tried to resist—such as those who sought refuge from Goldfish’s cheddary goodness by living in former nuclear-missile silos—still found their children innocently enjoying handfuls of Goldfish while watching Teletubbies.

In other words, while Goldfish were evolving their emotional and communicative capabilities, they were also accumulating market share, invited into American homes like little Trojan Horses.

2006: Pepperidge Farm announces a new ad campaign featuring Finn and three new Goldfish friends, Gilbert, Brooke and X-treme. Steve White, Vice President, Youth Snacks, commented: “We see this new campaign as a tool to begin to help teach important lessons and help instill values in kids in ways they understand and identify with, without being preachy or patronizing. The Goldfish characters’ distinct personalities and tales of everyday life are things every child—and adult—can relate to in an optimistic way.”

Having built the infrastructure for its own mini-religion, with a devoted following of millions, what “lessons” and “values” will be forthcoming? What panspermic messages did those Swiss scientists transfer into the Goldfish genetic code that have yet to be expressed? And given Goldfish’s recent rate of evolution, how long will it be until Goldfish are capable of human-like intelligence, and perhaps superhuman emotional, brand-building characteristics?

The stakes are high. We could preemptively try to negotiate with them now, before they turn America’s children against us. If so, do we take the obvious route and negotiate with Finn, or do we try to turn his new sidekicks against him?

The 85 billion Goldfish crackers produced each year are forward-positioned in diaper bags, pantries, and other strategic locales throughout the world. We don’t know their next move. What will be ours?

[Note to readers who arrive here from a search after April 1, 2007, when this was written. If you are unfamiliar with April Fools (or All Fools) Day, then be aware that the above is not entirely reliable, and thus you should not use it as a primary source for your term paper.]