Monday, December 12, 2005

Head First and Zig-Zag Learning

While in college, I had a job as a teaching assistant for freshman writing courses. Going in, many students struggled with writing because they didn’t get the process. They assumed that a good writer just wrote, as if taking dictation from an inner voice. Problem was, their inner dictators would never appear, leaving the students like frustrated wads of crumpled paper.

Because they were never taught to think, these students expected to be told what to write, if not by their mysterious inner dictator, then by me. My job was to change their perspective from seeking dictatorship to seeking their inner democracy—learning to consider many points of view, asking critical questions, and thinking for themselves.

For many, the big revelation was that thinking for yourself, and expressing it in writing, is messy. The result might look neat and tidy, but you don’t get there in one clean pass. It is a zig-zaggy process of discovery, where writing and thinking—and rewriting, rethinking, and talking to others—lead somewhere you can find only by taking the journey. It is a different type of learning from what most first-year students know, but when it clicks, you can feel a door opening inside.

I tell this story as background to my thoughts on an innovative series of technical how-to books, O’Reilly’s “Head First” series. The series presents subjects in terms of how people learn, which is more like the aforementioned zig-zaggy process of discovery than the orderly, linear approach of the traditional tech how-to book.

Accordingly, in the Head First series, redundancy is a feature, not a bug. The authors present key concepts in multiple ways, from different angles. You might get a relatively conventional explanation of an idea in one place. Later, it may recur within a dialog, a story, or a visual—or all of the above.

In Head First, visuals are big, especially pictures of people who act out key ideas, raise questions, and otherwise proxy for you, the reader. If the traditional tech how-to book is like the command line, then all these people pictures are like Microsoft Bob, but without Bob’s lobotomized version of user friendliness. Head First uses people as props for illustrating ideas, providing more immediacy and emotion than an abstract description can. It’s the social interface, on paper. And it’s not the only way things are presented, just another form of zig-zagging.

Finally, Head First’s writing style is conversational, not didactic—the books talk with you, not at you. And in the name of keeping you engaged, they employ humor and cleverness liberally. Mix it up. Zig and zag.

Although some tech types will regard these titles as O’Reilly’s answer to the “...for Dummies” series, Head First’s mission is more profound. The point is not to dumb-down a topic but rather to make it more learnable without sacrificing the substance of a high-quality intro text. So if the Microsoft Bob analogy brought negative connotations with it, let me restate the point more generally: The typical tech how-to book is the command line; Head First is the GUI, or at least a notable step toward it. Discuss.

Remember, the lesson about teaching writing was that writers learn what they’re saying zig-zaggedly. For complex topics, readers learn the same way. The question is whether the media they learn from helps or hinders that.

For more on Head First, see the series’ home page. Plus, for non-technical but business-relevant Head First commentary, I recommend the blog Creating Passionate Users  by authors of Head First books. That’s where I saw the announcement of the latest book in the series, “Head First HTML with CSS & XHTML,” which reminded me I had a blog entry waiting to get out on this subject.

So, to the people behind the Head First series, congratulations and thanks for moving the ball forward.

Thursday, December 8, 2005

Data Visualization Lessons from Gapminder

Having been around business analytics for more than a decade, I have seen many attempts at innovative data visualizations: techniques for graphically representing data that go beyond the bar, pie, and other charting classics. By now, the typical business analyst was supposed to be flying through 3D datascapes. But alas, the virtual jetpacks have not yet taken off.

Where I have seen progress is in a simpler form of data visualization that extends the charting classics with animation. By making chart elements active, a story can be told as the elements change—for example, by showing how a stacked chart’s layers build up.

A recent posting on TEDblog by June Cohen pointed me to a good example: Gapminder provides visualizations of United Nations data about various countries’ income and health levels. The graphic below is from the first presentation, which you can view at the Gapminder home page. Click its title (“1 Income”) in the green box in the middle of the page; when it appears, click the big arrow button at the bottom right to go forward through the screens.

Critics might dismiss these types of animations as eye candy, somehow below serious analytics. However, by that standard, charting itself could be called eye candy, since the underlying numbers are all you need—an argument that would find few takers.

What most critics actually fear is not animation itself but pointless animation such as PowerPoint transitions gone amok. Yet when it is done well, animation can do for charting what charting does for numbers: provide a more approachable and impactful view. That sounds a lot like the promise of better data visualization. So even if Gapminder-style animations seem like baby steps compared to 3D datascapes, the data-viz field may need to accept and build on baby steps to get to the long-promised leaps and bounds.

Anyway, decide for yourself. You already know that income and health are distributed unevenly throughout the world. See if Gapminder’s presentation brings the point home in a stronger way than you’ve seen before.

Sunday, December 4, 2005

“Don’t Trash California” Ad

Can you think of a memorable public-service advertisement from the radio? They are rare, but here is one that is both funny and effective. (The link is to a Windows Media Audio file. You’ll need to download it and then play it using Windows Media Player, Winamp, or something else that plays WMA files. Go ahead. It’s worthwhile.)

As an exercise in rhetoric (persuasion through language), this ad uses two techniques that go back thousands of years:

(1) Instead of having a spokesperson earnestly tell you not to litter, it makes the point through a story, a modern parable with sound effects.

(2) The core argument is based on reciprocity—in this case, a twist on “Do to others as you would have others do to you.”

Mix these ingredients with fast-paced, clever execution, and you’ve got something that works.

Tuesday, November 29, 2005

Opportunities in Image Search

Below are the first 20 results of a Google Images search for a two-word phrase. What do you think the phrase is?

The answer: “computer salesman” (with the quotes)

I found one result (the Mac OS X guy) reasonable, and another (the woman with the iPod) on the borderline. That’s a 10% hit rate.

Among the other 90% were a wolf, a hydroplaning car, a stick figure, Elizabeth Dole, two books, and a guy standing by a telephone booth. Also present were several cartoons, some of which could be relevant if I wanted cartoons, but I didn’t. (The Advanced Image Search feature did not support filtering out cartoon images, although adding “-cartoon” to the search got most of them.)

As far as I’m aware, Google Images’ results are primarily based on the text adjacent to the image, as opposed to deep analysis of image content. Thus, if we were to look at each result’s surrounding page, we would probably find something about a computer salesman. In some cases, this approach works—for example, searching “U.S. flag” yields good results. In others, per “computer salesman,” it doesn’t.
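To see why ranking by adjacent text misfires, here is a toy scorer in Python. The pages, image names, and scoring rule are all invented for illustration; real engines are far more sophisticated, but the failure mode is the same: text near an image can mention a phrase without the image depicting it.

```python
# Toy illustration of text-adjacency image ranking: an image is scored by
# how many query words appear in the text surrounding it on its page.
# The images and snippets below are invented for the example.

def score(query, nearby_text):
    """Count how many query words appear in the text adjacent to an image."""
    words = query.lower().split()
    text = nearby_text.lower()
    return sum(1 for w in words if w in text)

images = {
    "wolf.jpg": "our computer salesman joked he was a lone wolf on the road",
    "flag.jpg": "photo of the U.S. flag flying over the Capitol",
    "rep.jpg":  "a computer salesman demonstrates a laptop to a customer",
}

query = "computer salesman"
ranked = sorted(images, key=lambda img: score(query, images[img]), reverse=True)
# wolf.jpg and rep.jpg tie with a score of 2: both pages mention the phrase,
# but only one image actually shows a computer salesman.
```

The wolf photo ranks just as high as the genuine one, which is exactly the “computer salesman” experience above.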

Two opportunities here:

  1. Need an instant party game? Have someone search for terms on an image-search site and then, based only on the results, let the audience guess what is being searched. The searcher can give hints like “you’re getting warmer/colder.” For maximum fun, search a word or phrase with largely (but not totally) misleading and surreal results.
  2. There is plenty of room for a better general-purpose image search engine. I say “general purpose” because specialized photo sites already do better by manually attaching keywords or tags to photos. Professional stock-photo sites employ people who do that; photo-sharing sites like Flickr spread the keywording burden among the user base. These efforts lead to better results, but they are limited to a much smaller universe of photos than all public photos Web-wide. So the opportunity is to make image search smarter while keeping the photo universe big.

Tuesday, November 22, 2005

Thinking About Eating

Two recent studies about the psychology of eating highlight the subconscious at work. Here is the first, by a professor at the University of Chicago:

Imagine two servings of ice cream, one featuring a five-ounce cup overfilled with seven ounces, the other a ten-ounce cup filled with only eight ounces. Objectively the under-filled serving is better, because it contains more. But a study conducted by Christopher Hsee found that unless these two servings are presented side by side, the seven-ounce serving is actually considered more valuable. Apparently, people do not base their judgment on the amount of ice cream available, which is difficult to evaluate in isolation. Instead, they rely on an easy-to-evaluate cue: whether the serving is overfilled or under-filled. Overfilling evokes positive feelings while under-filling evokes negative feelings, and these feelings dictate people’s evaluations. (from “More is Not Always Better”)

The second study shows that people think about eating or drinking more in terms of a food unit (I had a soft drink) than a portion size (it’s still a single soft drink, whether it comes in a 12-ounce can or a 24-ounce bottle).

In one of their experiments, researchers at the University of Pennsylvania...

...offered a large mixing bowl of [M&Ms candy] at the front desk of the concierge of an apartment building. Below the bowl hung a sign that read “Eat Your Fill” with “please use the spoon to serve yourself” written underneath.

If presented with a small spoon, most passersby would take a single scoop, even though the sign encouraged them to take more. If given a much larger spoon, the subjects would still take a single scoop, even though that one scoop contained much more candy. The subjects were inadvertently eating twice as much candy when the larger scoop happened to be in the bowl.

“It is more than just people afraid of appearing greedy. They didn’t know they were being observed,” Geier said. “We have a culturally enforced ‘consumption norm,’ which promotes both the tendency to complete eating a unit and the idea that a single unit is the proper amount to eat.” (from “Just How Much Is a Serving of Dip?”)

Monday, November 21, 2005

Windows in the Rearview Mirror

The first few versions of Microsoft Windows are often cited as examples of half-baked software being foisted on the marketplace. As the legend goes, it took Microsoft until version 3.0 to make Windows marginally worthwhile.

So I was intrigued by the following, excerpted from Download Squad’s 20 things you don’t know about Windows 1.0:

After taking a look at a very early pre-release version of Windows in 1983, Byte Magazine declared it a system that would “offer remarkable openness, reconfigurability, and transportability as well as modest hardware requirements and pricing.”

In 1984, PC World said that Windows “provides a simple, powerful, and inexpensive user interface that works with most popular programs. That alone is enough to guarantee consumer support to make it the de facto standard of the personal computer market.”

Shortly after its release, PC Magazine gushed of Windows 1.0: “If you’ve ever complained about DOS and envied those more skillful at reaping its inherent productivity bonuses, Windows is just what you need. It makes dealing with DOS a snap and opens up all sorts of new possibilities. Once you try it, unless you’re already a DOS master, you’ll wonder how you ever got along in DOS without it.”

Do these words stem from the lowered expectations of a DOS-addled world? Are they the product of computer journalists practicing the power of positive thinking? Or was Windows 1.0 not as bad as legend has it?

Sunday, November 20, 2005

Where is the Pets.com of Web 2.0?

The question, “Are we in another bubble?” continues to circulate around the blogosphere. Recent answers are mostly flavors of no (for example, Scoble, Battelle, Malik). So let’s ask the question, what would qualify as a yes?

Measured by the NASDAQ, the first Web bubble really got frothy in 1999 and peaked in early 2000. Around then, a common model for consumer-focused Web start-ups went like this:

  1. Raise tens of millions in venture capital.
  2. Pick something that sounds good with an “e” in front of it or a “.com” after it.
  3. Hand a big chunk of your venture-capital money to Yahoo, AOL, and a few other portals in exchange for traffic; optionally, do a Super Bowl ad or similar big-media spend.
  4. Do an IPO based on your “momentum” from (3).

The bubble part of this arrangement was that the public markets bought it, rewarding now-infamous players with IPOs. With its sock-puppet mascot, Pets.com is perhaps the most memorable example. There were many others.

Fast forward to today. Where is the Pets.com of Web 2.0? I’m not talking about fallen stars; it’s too early for that. I’m asking whether any Web 2.0 company looks like one of the big dot-bombs in bubble mode.

When we see fledgling companies banking wads of venture capital to spend primarily on marketing programs, when the economics of those marketing programs don’t matter, and when the financial markets hand out the rewards anyway, then we’ll know to ring the “Bubble 2.0” bell.

Of course, we can argue about other qualifications for a new bubble, but let’s just remember how high the bar was set by Bubble 1.0.

[See also the Bubble Calibration Instrument.]

Thursday, November 17, 2005

By and Largely Smaller

“By and large, we brown-bag it: 44% of Americans bring lunch from home to eat at work.” (Parade magazine, November 13, 2005, page 4)

I was disappointed that this statement did not have an accompanying graphic, so I made one.

Monday, November 14, 2005

The Flaw of Averages

There’s nothing like a life-or-death issue to illustrate an analytical problem:

[A new paper concludes] there are fundamental flaws in the way researchers usually analyze and report the results of medical studies, especially randomized clinical trials that are seen as the “gold standard” method for studying the effectiveness and safety of new treatments....

“Most studies currently emphasize the average risk and average benefit found in the study, but the average trial participant might get much less benefit than average, or even be harmed,” says lead author Rodney Hayward, M.D. “If nine people are in a room with Bill Gates, the average net worth of people in the room will be several billion dollars even if everyone else in the room is in serious debt.”

The authors argue for a more sophisticated form of analysis, risk stratification, which they found in only 4% of papers reviewed from prominent medical journals. To make their point, they cite a major 1993 study that showed the clot-busting drug tPA to be, on average, significantly effective for heart-attack patients.

But when Hayward’s colleague David M. Kent, M.D., M.Sc., now at Tufts University, analyzed the data from this study in a risk-stratified way, he found major differences in effectiveness of tPA. In fact, his analysis shows that 25 percent of the patients in the original study accounted for more than 60 percent of all the benefit in the entire study. Meanwhile, half the patients received little or no benefit — and some had such a high risk of brain bleeding from tPA that there was net harm.

The full write-up about the paper is here. Those who know marketing analytics will recognize that risk stratification is similar to segmentation. Just as smart marketers no longer pursue a single, average customer, the paper’s authors are urging the medical establishment to be wary of studies about the average patient.
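Hayward’s “room with Bill Gates” example takes only a few lines of arithmetic to make concrete. The net-worth figures below are invented to fit the thought experiment; the point is how badly the average misrepresents the typical person in the room.

```python
# The "room with Bill Gates" thought experiment: nine people in serious
# debt plus one multibillionaire. The average hides the typical case.
# Net worths are invented to fit the example.

net_worths = [-50_000] * 9 + [50_000_000_000]  # nine in debt, one Bill Gates

average = sum(net_worths) / len(net_worths)
print(f"average net worth: ${average:,.0f}")   # roughly $5 billion

# "Stratifying" -- here, just separating the outlier -- tells the real story:
typical = sorted(net_worths)[len(net_worths) // 2]  # the median person
print(f"median net worth:  ${typical:,.0f}")        # -$50,000
```

The same logic drives the tPA finding: an impressive average effect can coexist with most patients getting little benefit, and some getting net harm.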

Sunday, November 13, 2005

Follow the Money with “Where Is George”

I received a five-dollar bill with a URL written along the top: www.whereisgeorge.com.

That’s George as in Washington, man on the U.S. one-dollar bill. The site is a collective exercise in tracking individual bills as they circulate. You can enter the serial number and series number of any major U.S. bill.

I did so for the bill with the URL on it. The bill had previously been tracked in Portland, Oregon, 20 days before. It had traveled an average of 25 miles per day.

The Oregonian who entered the bill had a profile. He is the night manager of an inn, and thus sees a lot of new bills. He has entered more than 1,000 bills, with only 8 follow-on hits so far. I’m number nine. The site will automatically notify him of the location I entered for the bill.

Take a look at one of the site’s all-time distance leaders: a bill that as of November 2005 has traveled more than four thousand miles over three years. Starting from the northern tip of Michigan, it went west to Panguitch, Utah, then south to the western tip of Florida, by way of several stops in Texas, Oklahoma, and Louisiana. Having subsequently been tracked in Kentucky and Tennessee, its latest sighting was Dayton, Ohio.

To do my bit for the collective effort, I entered the other two bills in my wallet at the time. Follow the money.

Sunday, November 6, 2005

Frequent Flyer Cards Keep Airlines Alive

On Marginal Revolution (an economics blog), Gary Leff uncorks some eyebrow-raising facts about how “several major airlines have been kept in the air purely to support their underlying credit card business.”

Great post, Gary.

Professional-Services Firms as Sales Channels

It was probably coincidence, but I’ve had a few recent conversations where start-up founders want to use professional-services firms as sales channels. The attraction is obvious: Unknown Start-Up isn’t being taken seriously by Big Company, but what if Unknown Start-Up’s product were pitched by Big Company’s existing system integrator (or auditing firm or strategy-consulting firm or ad agency or whatever is appropriate for the start-up’s business)?

While I don’t claim to be an expert, I have some observations from experience. They apparently had value in those conversations, so I figured I would share here.

  1. Working through services firms does not solve your sales problems; it moves them. Where once you had to sell to customers, now you need to sell to services firms. For some companies, it’s a more fruitful path. But don’t kid yourself: You are adding another moving part to your sales machine. At least in the near term, it will require more effort and resources than going direct.
  2. Most services firms are as risk averse as their clients. Training their people to sell your technology, not to mention deploy it, costs firms money. If that investment is necessary just to establish whether clients will bite, why should they make it? They are not like venture capitalists, who can afford to make many losing bets in exchange for a single big winner. They need to keep their people billed out to clients.
  3. The best entry point is through a professional-services firm’s client. You may not be able to get Big Company to consider your start-up by itself, but can you at least intrigue someone there enough to pass you along to Big Company’s professional-services firm? It’s a night-and-day difference between your asking a professional-services firm to check you out versus one of the firm’s clients asking that you be checked out. The latter can justify internal expenditures in the name of the client relationship; some partners might even get Big Company to pay for a small project, making an evaluation billable. Either way, it breaks through the problem in number 2 above.
  4. Deal with partners, not the firm. Most professional-services firms are partnerships, where the senior members of the firm are partners in ownership. Typically, each of the firm’s clients will have a partner in charge of the relationship. This partner is both chief sales rep and chief gatekeeper for that client. You must convince this person to be your advocate. You will not be successful going around him or her. Less obvious, you won’t be successful going above either. There may be higher-ranking people in the company, but it’s unlikely they will want, or even be able, to push your solution on other partners. Only when you’ve been part of multiple partners’ successes is there a chance for a practice to form around you, and, from there, the sales leverage to emerge.
  5. Don’t give up proximity to the customer. Some partners will want to insulate the customer from you, so the customer only deals with the professional-services provider. This is especially true of advertising and marketing agencies, less so of technical professional-services firms. Don’t let it happen. You want credit and referencability for your part. Also, if things go wrong, you need to know before it’s too late; otherwise, the professional-services team could leave you holding the bag, saying “the technology didn’t work.”

If this all sounds hard, it is. A lot of ingredients must come together the right way, so invest incrementally. After all, your prospective partners will only go deeper based on results; the same should hold true for you too.

Thursday, November 3, 2005

Anchor from the Eye of Destruction

News item, as reported by USA Today:

Aaron Brown, the cerebral anchor once touted as the “voice of CNN,” whom the network recently termed the “ice” to Anderson Cooper’s “fire,” has been sent packing after four years on the air.

Brown’s star has vanished while Cooper’s is rising: The move comes after a year of notable live reports from Cooper on natural-disaster stories — where television news careers are often made — from his searing coverage of January’s Asian tsunami to his recent reports from the path of destruction caused by Hurricane Katrina.

I think this move is only good for a year. By the end of next year’s hurricane season, I predict we’ll see CNN’s Dr. Sanjay Gupta in the big chair. He will eclipse Cooper by delivering 72 hours of continuous solo coverage from an ultralight inside a category 5 hurricane. America will be riveted by this compelling “eye of destruction” viewpoint, as well as the personal drama of Dr. Gupta’s performing elective surgery on himself, in the ultralight amid 170-mph winds, using only a Swiss Army knife and a cosmetics mirror.

Such is the furnace from which the modern anchor is forged.

Wednesday, November 2, 2005

Richard Branson, Analyst

Here is an excerpt from a recent interview with Richard Branson, founder of Virgin Group. Referring to his retail-music business, he basically says that the original concept is running out of gas, that they’re trying to evolve it to the next stage, and that they don’t know if it will work.

Q: You said at the opening of the new store in Los Angeles that you have to adapt to make sure that Virgin stores are here 50 years from now. How do you do that?

A: You really have to be a chameleon to be in the music business, and in any business actually. Nothing lasts forever. What we’re basically trying to be is a lifestyle shop, but very much reflecting the Virgin brand. So we have great books, we’ll have the best films, the best DVDs and lots of nice little touches. We still want to have the broadest range of music, but we can’t survive on music alone. I just don’t think there’s a future, I’m afraid, in that kind of store anymore.

Q: You have closed six stores in the United States and opened one. Any plans to close or open any other stores?

A: We’ve sorted out the loss makers now. If this store [the latest-generation Virgin Megastore, just opened in Los Angeles] works, you know we could do hundreds of them. But we’ve got to still make it work. And this industry is a tough one. Even trying to reinvent yourself is tough. Nothing is guaranteed. It’s a much tougher industry than it was 20 years ago. We’ve invested a lot of money in our music retail company. We’ll give people a big chance to see if they can deliver. And hopefully they will be able to.

The last paragraph sounds so...realistic. It’s the celebrity CEO as analyst rather than cheerleader. I like it.

Sunday, October 30, 2005

When the Means Become the Ends

Organizations can easily do the wrong thing by mistaking the means for the ends. Following are a couple of examples I ran across in the past few days.

Misleading Metrics

Airlines are subject to on-time rankings, a means to demonstrate reliability and thus customer satisfaction. However, these rankings are only based on non-stop flights, so they don’t consider flights that act as connections. And thus we have problems like what Ross Mayfield reports here:

It was abundantly clear last night when my connection was delayed that the airline industry is running on the wrong metrics. Half of the plane missed their connecting flight, most by minutes, when doors were still open, but gates closed — for sake of on-time-departure. The last planes left within a half an hour and we were left stranded in Virginia without hotel rooms in the vicinity.

Much of the airline industry operates via the “hub and spoke” method of multi-hop trips, so this kind of scenario is not a fluke. While it’s true that airlines will sometimes hold the connecting flight in close situations, the on-time metric creates a perverse incentive not to do so—despite the fact the metric exists as a proxy for customer satisfaction.

Method versus Mission

Former CIA Director George Tenet used to describe the CIA’s business as “stealing secrets.” In the CIA’s Studies in Intelligence journal, Stephen Mercado critiques this mindset for conflating a method (stealing secrets) with the organization’s mission, which is to provide actionable intelligence about national security. Mercado argues that another method—improving the CIA’s analysis of freely available information (“open sources”) such as from foreign newspapers—is more effective yet underutilized:

Despite numerous surveys putting the contribution of open sources anywhere from 35 to 95 percent of the intelligence used in the government, [open-sources intelligence’s] share of the overall intelligence budget has been estimated at roughly 1 percent.

Mercado argues that the “stealing secrets” mindset is so deeply ingrained in the CIA’s culture that the method has become the mission. After making an argument for open-sources intelligence, he advocates doubling its budget—to 2% of the overall intelligence budget—which is apparently a radical proposal.

Livening Up Asexual-Fungus Research

Earlier this week, Imperial College London issued a press release about research on Penicillium marneffei, an asexual fungus. In this context, asexual means an organism reproduces without a mate, cloning itself.

In what must have been a desperate attempt to get Cosmopolitan magazine to pick up their story, they titled the press release “Lack of sex could be a signpost to extinction, claim researchers.”

For the record, here it is.

Wednesday, October 26, 2005

Word of the Day: Deroach

I came across the word deroach back in my SRI days, when I often talked to people from the cable-television industry. In that industry, deroach referred to the process of refurbishing a set-top box after it had been returned by a customer. (Example usage: “Those boxes need deroaching.”)

Apparently, analog set-top boxes of the day sometimes ended up as unintentional roach motels, and thus the term. Unlike the better known debug, which brings to mind thoughtful diagnosis of a subtle problem, deroach is thoughtless disposal of an obvious problem—shake ’em out and move on.

I’m blogging this topic because when I searched Google for deroach, I was surprised to find only references to the word as a name for places and people. So for future searchers of the non-name deroach, perhaps you will find your answer in this entry.

And for the rest of you who have inadvertently read this far, that concludes today’s intersection of electronics, entomology, and etymology.

Sunday, October 23, 2005

War on the Wane?

Kudos to Chris Anderson on TEDBlog for highlighting a recent study about armed conflict worldwide, or more to the point, the lessening amount of it. Since peaking in 1992, the number of armed conflicts has dropped 40%. Larger conflicts (those with more than 1,000 battle deaths) are down 80%. As Chris asks, shouldn’t this be news?

For a quick take, read Chris’ post. Or for an executive summary of the research, see the Human Security Report 2005’s Overview.

Because it wasn’t in the report and the source data was easy to get, I created my own illustration of the good news (below). It shows, from 1946 to 2004, the number of nations along with the number of armed conflicts. This relationship matters because most conflicts occur within nations, as with insurgencies and civil wars. So the more nations there are, the more venues for conflicts within nations.

And yet...

I didn’t show it in the graph, but if you divide the number of conflicts by the number of nations, 2003 and 2004 are the two lowest years in the data set.

Although the number of armed conflicts is still well above zero, it is encouraging to see this forest from the usual trees.

Costing Out Email’s Manifest Destiny

Robert X. Cringely recently explored the cost of all 202 million American Internet users’ having Gmail accounts that actually consume the free 1 gigabyte of storage. Let’s call it the manifest destiny of email storage.

Cringely enumerated the costs of hard-drive hardware and data-center power necessary to make that storage available. He took the total, $30 million, to be a big number in relation to a “free” (that is, advertising-based) email service.

In a great response, Ethan Stock finished the math to show that, even with Cringely’s assumptions multiplied by five, the capital-expenditure cost of one gigabyte of email per American Internet user is 62 cents, and the yearly operational expenditure is 8 cents. As Ethan indicates, the news here should not be how expensive it is but rather how cheap it is. Paying for it requires well less than a dollar per year in advertising fees (meaning Google’s cut of the advertising spent) per Gmail user.

To be fair, near the end of his piece Cringely raised the ante by saying the addition of pictures and video will raise the cost by two orders of magnitude. However, that is a future scenario. Over the time it takes to happen, hardware costs and operational efficiencies will have continued to improve. Not to mention, the average American’s email storage requirements circa 2005 are well less than the gigabyte that Cringely posited before raising the ante. So the real costs have a lower starting point.
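Ethan’s back-of-envelope math scales roughly like this. The per-user figures come from the post itself; the totals are just the obvious multiplication.

```python
# Back-of-envelope cost of giving every American Internet user 1 GB of
# email storage, using the per-user figures cited in the post (which
# already include Cringely's assumptions multiplied by five).

users = 202_000_000          # American Internet users
capex_per_user = 0.62        # one-time hardware cost per user, dollars
opex_per_user_yr = 0.08      # yearly operational cost per user, dollars

total_capex = users * capex_per_user
total_opex_yr = users * opex_per_user_yr

print(f"one-time capex: ${total_capex / 1e6:.0f} million")    # ~$125 million
print(f"yearly opex:    ${total_opex_yr / 1e6:.0f} million")  # ~$16 million
```

Even the padded totals are modest next to well under a dollar per user per year in advertising revenue, which is Ethan’s point: the news is how cheap it is.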

Of course, it’s always possible to create scenarios where these services become uneconomic. For me, however, the lesson here is how much can be economic.

Finally, it’s worth noting that while Cringely thought that realizing email’s manifest destiny would be hard and expensive, he still thought it would happen. And thus, in classic American fashion, the question between him and Ethan is not about whether something that seems improbably ambitious will happen but rather when it will happen.

Tuesday, October 18, 2005

John Battelle’s The Search

I recently read The Search by John Battelle. It’s about how search has become central to the Internet economy and why today’s search businesses, epitomized by Google, are the beginning of something even bigger.

John talked to me for the book, so I’m not trying to be Mr. Objective here. I’ll just offer a perspective on its main theme.

To start, let me say that The Search dishes generous helpings of insider history about Google and other search players, past and present. John had access to most of the key people involved, so a lot of the quotes represent first-hand, new stuff. Given today’s rampant Googlephilia, the book would have been plenty successful if it stopped there.

The Search’s distinction, however, is that it threads Google’s story into a larger theme about what John calls the Database of Intentions: “the aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result.” (For the technical crowd, he does not mean “aggregate” in the analytical sense of summarizing/abstracting data; rather, he is referring to all the disparate bits of detailed behavioral data, accumulated together.)

No one company owns the Database of Intentions. It is spread among millions of Web sites, each of which collects its own data, as well as other network-based media (mobile applications, Tivo-style services, and so on). It turns out that the big search companies have among the largest concentrations of such data. Moreover, John argues that Goto.com/Overture was first, and Google has so far been best, at commercializing it on a large scale.

They have done so in a simple but extremely effective way, selling advertising associated with search keywords. When you search for “Sony VAIO,” you are expressing intent, massively qualifying yourself to a certain set of companies: How much is it worth to Sony to appear next to search results for “Sony VAIO”? What about Sony’s competitors? How about all the possible retailers of Sony VAIO products? Bidding is open for this and a practically unlimited number of other keyword combinations. And the kicker is, companies only pay when you click their ads, so the incentive to participate is high. Even a one-person small business can sign up with a credit card. Hundreds of thousands have.

Now, let’s reinforce two important points:

  1. This new ad marketplace allows targeting down to super-specific niches. An example: In an attempt to get highly obscure, I searched “Charles Fourier,” the utopian-socialist philosopher of the 1800s. I got an ad targeted to people researching him. The company behind this ad might have paid a nickel for my click on its ad, but as John summarizes the business model, it’s “a billion dollars, one nickel at a time.” A substantial number of those nickels represent new spending in the ad economy, from small businesses that have never had a venue beyond the yellow pages, or from bigger businesses (like Amazon.com) that can efficiently sell products with niche appeal along with mainstream products.
  2. Companies pay for ad responses, not ad impressions. This is a major change. Because a large percentage of advertisers can calculate what an ad response is worth to their business, they are ready to spend more on advertising than ever, for as long as it continues to pay back.

That last phrase—“as long as it continues to pay back”—is crucial. It brings us to The Search’s big idea: that the Database of Intentions, in primitive form, is the key factor behind the current system’s rise. Whereas a traditional advertising venue would be bragging if it could offer 31 flavors of content or demographics, this new system allowed search engines to sell millions of flavors of intent. These were sufficiently targeted to make the pay-per-click business model a winner. And the whole thing was automatable enough to allow a marketplace where anyone could participate.
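The economics of paying only for responses can be made concrete with a toy break-even calculation. All numbers below are hypothetical illustrations, not figures from the book:

```python
# Why "pay per response" expands ad spending: an advertiser who knows what a
# click is worth can compute a break-even cost-per-click and keep buying
# clicks priced below it. The numbers here are hypothetical illustrations.

conversion_rate = 0.02     # assume 2% of clicks become a sale
profit_per_sale = 40.00    # assume $40 of margin per sale

# Expected value of one click: this is the most the advertiser should pay.
break_even_cpc = conversion_rate * profit_per_sale

bid = 0.50  # hypothetical current cost per click on some keyword
if bid < break_even_cpc:
    print(f"Keep buying clicks: ${bid:.2f} < ${break_even_cpc:.2f} break-even")
```

Because the calculation is this simple, spending naturally expands to every keyword priced below break-even, rather than stopping at some fixed ad budget, which is exactly the “as long as it continues to pay back” dynamic.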

It’s a powerful combination of factors, yet John’s bigger point is that these factors are only starting to play out. They will apply to other media, like television, soon enough. And as the Database of Intentions evolves and is mined more intelligently, it will make “as long as it continues to pay back” go longer and longer for the average advertiser. With billions of extra dollars awaiting ever more efficient forms of advertising and micro-targeting, further growth is at hand.

This extra efficiency can come from many angles. The Googles of the world will get smarter about mining and leveraging full clickstreams, not just your keywords-of-the-moment. Sites, or networks of sites, specialized in certain areas will make search and advertising more powerful by injecting domain-specific knowledge into their systems. John cites GlobalSpec, a site specialized in engineering parts, as an “intelligent island” that has manually created associations of concepts from its world. These associations—a Semantic Web (that is, web of meaning)—in turn can make searching and advertising smarter. Meanwhile, an informal version of the Semantic Web is emerging with user-created tags for various Web pages, which provides yet another kind of grist for the intelligence mill.

Finally, I would not underestimate the potential of people taking their data into their own hands, something that John does not delve into. In a world driven by data about intent, your intent becomes a kind of currency. As I’ve written elsewhere, the next Google may be a company that does not use data about people as a proprietary asset but rather becomes an asset manager for people’s data. Given the Database of Intentions’ privacy and societal implications, which John raises in the book, this type of approach has much to offer. It may well also be economically optimal.

Whatever happens, The Search makes clear that we’re far closer to the beginning than the end of this important story. Kudos to John for going deep to tell it, conceptualize it, and popularize it.

Wednesday, October 12, 2005

Bubble Calibration Instrument

After the 2005 Web 2.0 conference, angst is rising about whether irrational exuberance has returned. In an effort to address this situation, I have designed a Bubble Calibration Instrument, pictured below.

Naming In Need Of Taming

Amid China’s real-estate boom, new housing developments are appearing with aspirational names like “Aladdin Gardens” and “White House Mini District.” Alarmed that many of these names have a foreign influence, the city of Kunming is taking action. As reported in China Daily:

“The fashion for foreign sounding names on buildings is a loss to native culture and reflects poor taste,” [Kunming Communist Party Secretary] Yang said in remarks reported by the official Xinhua News Agency. “We must correct this practice immediately.”

So does the French government have a new ally in the battle against cultural imperialism? Sort of. Turns out the policy’s casualties will include “Paris of the East Plaza” and “French Gardens.”

In related news, I was recently talking to someone from China who mentioned “Tycoon City” and “Live Like a Kaiser” as further candidates for housing developments with naming in need of taming.

Tuesday, October 11, 2005

Weather Entrepreneurs

Here are a couple news items about unusual, weather-related entrepreneurial efforts:

  • The Economist profiled a Canadian engineer, Louis Michaud, who wants to create artificial tornadoes as a source of power. If you’ve ever seen a wind farm, you know that humans already get power from wind. The traditional challenge has been to engineer ever more efficient wind turbines to convert wind to power. By contrast, Michaud is attempting to engineer more powerful wind. In essence, he wants to create the conditions that give rise to a natural tornado. The result would be a real tornado, albeit one (according to Michaud) confined to a single place and controlled in intensity, and thus instrumentable for generating power.
  • The New York Times had a long article about companies attempting to do business in the Arctic. Included is the story of Pat Broe, who in 1997 bought a disused port in northern Canada, paying the Canadian government $7 (yes, $7; $10 Canadian at the time). But now, with the Arctic ice cap having shrunk to its smallest size on record, Arctic shipping lanes are becoming possible for ever longer stretches of the year. For some ships, these lanes can offer shortcuts that save thousands of miles. And conveniently, Broe now has a port along one of the key routes. He’s estimating potential revenues up to $100 million yearly. He also owns the rail line out of the port, which he snagged after the Canadian government denationalized it.

Apparently, playing the weather futures markets wasn’t enough fun for these guys.

Sunday, October 9, 2005

My Freakonomics Encounter

This weekend I had the opportunity to chat with Steven Levitt, professor of economics at the University of Chicago and co-author/subject of the best-selling book Freakonomics. Levitt is famous for finding unexpected answers to real-world questions via quantitative analyses. For example, which is more dangerous to a child: a household with a gun or a household with a swimming pool? (Answer: When Levitt looked at cause-of-death data for children in the United States, he found that swimming-pool-related deaths were roughly 100 times more prevalent than gunplay-related deaths.)

If such nuggets interest you, you’ll love Freakonomics. His co-author Stephen Dubner does for Levitt what Michael Lewis, author of Moneyball, did for baseball’s sabermetricians and quants, bringing the numbers to life with well-told stories.

That said, the in-person version of Levitt was remarkably similar to the voice of the book. He’s not an ivory-tower type whom Dubner had to decode for the world. In fact, he comes across as instinctively interesting. By that I mean his research seems motivated entirely by what intrigues him personally, but when he talks about it, you can’t help wanting to follow along: Is sumo wrestling rigged? Do real estate agents act in your best interest? Is the 1990s’ drop in crime related to the legalization of abortion twenty years earlier?

Levitt tends to focus on societal questions, but those of us in business analytics should thank him. Through Freakonomics, he is getting ordinary people interested in the value of using data and analytics to understand problems. Given that the alternatives—conventional wisdom, intuition, and “common sense”—are much easier for people to relate to, this is progress.

A few other Levitt resources for those interested:

  • Freakonomics Blog — Levitt and Dubner keep the freak-out going, including pointers to their pieces that appear in The New York Times.
  • Levitt’s Papers — For those accustomed to academic papers and college-level math, you’ll find Levitt an unusually clear writer. Also, the economics part of his work is more prominent here than in Freakonomics. I particularly liked the empirical analysis of gambling in the National Football League.
  • Treating HIV Doesn’t Pay — Levitt mentioned this research about AIDS in Africa by Emily Oster. She used a Freakonomics-style analysis that generated surprising conclusions about the most effective way to minimize loss of life, given the fixed amount of money available. It may make disquieting reading, but there’s no question it matters.

Thursday, October 6, 2005

Attention Trust, Clickstreams, and the Meaning-Mining Problem

Attention Trust is an organization designed to let Web users take control of data they generate online. For example, the organization recently announced a Firefox plug-in that lets you “record” your clickstream. The idea is that you could potentially share it with companies, presumably for something of value in return (a better site experience, product recommendations, money, whatever). TechCrunch has the most straightforward description I’ve seen.

How exactly such clickstream sharing will work is apparently to be determined. A big challenge will be what I call the meaning-mining problem: having just a clickstream is like having just an index without the book; to make the clickstream useful, you need to understand what it points to.

Let’s illustrate with an example. A clickstream is just a sequence of URLs that you visited. A URL like...

http://www.amazon.com/exec/obidos/tg/detail/-/B000005JA8

...is an opaque text string that has little meaning by itself; it only points to meaning. Request the URL’s page and you’ll see a music CD, Trout Mask Replica. It’s an album originally released in 1969 by Captain Beefheart, a relatively avant-garde artist. Further bits of significant detail are available either directly on the Amazon.com page or from secondary sources, which are another jump away. For example, if we know we are dealing with a music artist, a secondary source might be All Music Guide’s moods. For Captain Beefheart, they include “difficult,” “eccentric,” “cerebral,” “manic,” and “uncompromising.”

Now, if I’m a marketer, these bits of meaning provide clues about what the person who clicked this URL might like—not just in music but also in other media and many consumer-product categories.

Obviously, our example URL is a single data point, which can be misleading. But clickstreams tend to comprise lots of data points, especially if collected continuously over periods of time. So if you’ve been researching a car on the Web over the past few weeks, I know more than a few auto companies that would love to see your clickstream. Or, to be more precise, they’d love to mine the meaning of your clickstream: What category of car are you looking for? What brands are you considering? What price range are you considering? And so on.

The meaning-mining problem is important because these types of high-value questions are answerable if you can start with a relevant clickstream. But the meaning-mining problem is hard because machines are still mediocre at getting from the clickstream to reliably useful meaning. Of course, a human could do the job, following each link and then the secondary sources, but that doesn’t scale.

The vision of a Semantic Web is meant to help machines with these kinds of problems. In the meantime, today’s search engines can get part way there by extracting meaningful features, like keywords, from Web pages. Now that major search engines like Yahoo and Google have open APIs, I expect someone to make a Web service that takes a sequence of URLs and returns a set of coherent keywords that collectively “profile” the URLs’ immediate content. It will be a productive start.
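As a rough illustration of what such a keyword-profiling service might do, here is a toy sketch. It assumes the page text behind each URL has already been fetched, and it does nothing smarter than counting words; real meaning-mining would be far harder:

```python
# Toy sketch of the meaning-mining idea: given the text behind each URL in a
# clickstream, extract a small set of keywords that collectively "profile" it.
# A real service would have to fetch pages and do far smarter extraction;
# here we just count word frequencies, minus stopwords, across the texts.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "by", "for", "it"}

def profile(pages, top_n=5):
    """pages: list of page-text strings, one per clicked URL."""
    counts = Counter()
    for text in pages:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# Made-up page texts standing in for fetched clickstream pages:
clickstream_pages = [
    "Trout Mask Replica, an album by Captain Beefheart, released 1969",
    "Captain Beefheart: eccentric, cerebral, uncompromising avant-garde music",
]
print(profile(clickstream_pages))  # 'captain' and 'beefheart' rank first
```

Even this crude aggregation shows why multiple data points matter: terms that recur across the clickstream rise to the top, sketching the visitor’s interest.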

Speaking of which, Attention Trust deserves credit for its own productive start in bringing these types of issues to higher prominence. Tools like the clickstream recorder are especially useful because they bring tangibility to what otherwise tend to be academic-ish discussions.

However, delivering on the promise of putting people in control of their data will likely take a bigger player than Attention Trust. A lot of resources will be necessary to address the meaning-mining problem, as well as several other technical and practical (chicken/egg-style) obstacles. The success factors are:

  1. Potential access to users’ full clickstreams (by owning the operating system, as Microsoft does; by being an Internet Service Provider, as AOL and Microsoft are; by partnering with ISPs, as various search engines do; or by offering a browser toolbar, which could operate like Attention Trust’s clickstream recorder).
  2. Proximity to a huge number of users who can quickly generate a critical mass of use for the technology.
  3. A massive technical infrastructure to collect and mine clickstreams’ meanings and to make those meanings exchangeable among individuals and companies (or, for that matter, among individuals and other individuals).

Microsoft, AOL, and the major search engines are the obvious candidates, although an eBay or Amazon.com is a possibility too. It will be interesting to see which, if any, of these companies is first to make the necessary mindshift—and take the necessary risks—to go from using data about people as a proprietary asset to becoming an asset manager for people’s data.

Wednesday, September 28, 2005

Rogue Bloggers and Double Agents

Perhaps you’ve heard of Mini-Microsoft, an anonymous blogger at Microsoft. According to BusinessWeek, he “may be the most notorious blogger on corporate life,” because he frequently attacks Microsoft from within, names people who should be fired, and otherwise airs dirty laundry by the cart-load. Mini claims he’s dishing tough love, telling truths that need to be told before the company can reform itself.

So, at the height of his mainstream-media notoriety, it was interesting to see Mini go ga-ga over Microsoft’s Company Meeting 2005: “I think our customers are going to be delighted silly this coming year! ... Any vestiges of doubt or ennui get blown away once you actually see what we are on the verge of shipping.” (His review of the meeting starts a little ways down on this page, under “Post Company Meeting.”)

I am not doubting Mini’s authenticity, but this turn of events inspires a question: How long will it be until a PR agency orchestrates this type of prodigal-son moment on behalf of one of its clients?

That is, if a company already suffers from bad press and low credibility, who better to be the change agent than a conveniently anonymous rogue blogger? Like Mini, the rogue blogger would have enough inside dirt to make some news, getting the mainstream press to anoint him or her as a tantalizingly credible source. Some of that dirt might even be things the company wants to get out but can’t officially say—for example, that a recent executive “resignation” was really a firing for poor performance (message to shareholders: we know that needed fixing).

Having built credibility with a mix of juicy tidbits and question-authority attitude, the rogue blogger could then, at a critical moment, lock onto company messaging: “I have seen the light!” Or the rogue blogger could subtly change perspective over time, grudgingly giving ground to the relentless progress the company is making.

Of course, this rogue blogger would be a double agent, operating in the darker parts of the ethical gray zone. Most legitimate companies and PR agencies will not go there.

But for the inevitable ones who do go there, will it work? I hope not, but we might never know if it does.

Google Playing Clickstream Catch-Up

Lately, there’s been a lot of informed speculation about Google’s forays into WiFi access points and dark fiber. Last night, Scoble weighed in thusly:

So, why do I fear Google’s wifi? Well, if you own the last few yards in between people and the Internet you can really learn a lot. You can watch everything those people click on, what pages they visit, what browsers they use, how often they turn on Skype, and a lot of other stuff.

Isn’t it the case that Microsoft already has this data on millions of MSN subscribers who use the MSN network to access the Internet? Same for AOL. And depending on its ISP agreement with SBC, perhaps Yahoo too.

The point: When it comes to having access to full user clickstreams (not just those at google.com), Google is actually playing catch-up. So if you fear Google in that regard, you should have plenty of fear to spread around.

Yet elsewhere in the same post, Scoble seems impressed that MSN search “learns from usage patterns.” No indication of whether these are usage patterns just at MSN search or from the larger MSN clickstream pool, but either way it hints at this coin’s flipside: Clickstream data can be helpful not just to companies but to Web users. It can be used in the aggregate for improvements to the audience-wide user experience, or it can be used at the individual level, providing differentiated, personalized experiences.

Today, companies mostly use clickstreams at the aggregate level, because profiling and personalizing via the clickstream requires much more technical effort; it also requires finding an elusive sweet spot where user benefit and trust outweigh privacy risks. But a site that finds that sweet spot, as arguably Amazon.com has done, has enormous advantages.

Now let’s up the ante and talk about the advantages of finding that sweet spot not just at the site level (what you do at, say, Amazon.com) but Web-wide (what you do everywhere): Who will be best at collecting and using people’s full clickstreams in a way that everyone wins enough to participate?

Because of the technical and privacy challenges, the contestants are barely out of the starting blocks, which makes Google’s game of catch-up a potential game of leapfrog. I feel a Google Don’t Be Evil™ opportunity coming on.

Sunday, September 25, 2005

Harvesting Power from Human Motion: Small to Large Scale?

You may have read about the backpack or combat boot that generates power from its wearer’s walking. These are small-scale examples of “power harvesting” from human motion. They work by clever use of devices and/or materials that convert motion to power. The power generated is small but potentially enough for a mobile phone or other personal electronics. And if we get a little futuristic, the power could be used for wearable computers or smart clothing.

For me, these technologies invite the question of scaling up. For example, can a highway overpass be instrumented to harvest the vibrational motion from thousands of heavy, fast-moving cars and trucks? Can the overpass’s surface be adapted to harvest power directly from contact with the motion of those vehicles?

How would this work? A core technique of small-scale power harvesting is the use of piezoelectric materials. When these materials bend or stretch, they generate an electric charge. Instead of having a small wafer of piezoelectric material in a shoe, how about thousands of wafers arrayed throughout an overpass? What if they were connected to the surface so that they bent and stretched with the forces of multi-ton vehicles constantly zipping by?

This isn’t my area of expertise, so I don’t know the answers. But the questions illustrate the more general opportunity of thinking bigger about harvesting (some might say recycling) energy that humans are already expending. From a societal point of view, it would be a welcome area for innovation.
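Even without answers, the scale of the opportunity can be framed with simple arithmetic. Every number below is a hypothetical placeholder, not a measured value:

```python
# Hypothetical scale-up arithmetic for the instrumented-overpass idea.
# The per-wafer output is a rough order-of-magnitude guess for a small
# piezoelectric element under repeated loading, not a measured value.

wafers = 10_000                 # piezo elements embedded in the overpass
power_per_wafer_watts = 0.005   # ~5 mW each, assumed

total_watts = wafers * power_per_wafer_watts
print(f"Total harvested power: {total_watts:.0f} W")  # 50 W
```

If per-wafer output really is in the milliwatt range, thousands of wafers yield only tens of watts, which suggests the interesting engineering question is how far per-element output can be pushed, not just how many elements can be installed.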

Thursday, September 22, 2005

Why You Should Read the Feed

[Update 3/13/2012: This post still exists for historical reasons. Bloglines is gone, and much of what people used RSS for is now being done with Twitter. However, I still find RSS useful. I currently use Google Reader for RSS reading.]

If you have not discovered Web feeds (aka RSS feeds), you should. To illustrate why, I’ll cut straight to an example.

I use a “feed reader” Web site called Bloglines. I have told Bloglines which Web sites I visit regularly. Bloglines now automatically checks them, notifying me when anything new appears.

Why does this matter? I’ve saved time and hassle in two ways: (1) I don’t need to check sites only to find that nothing has changed. (2) When things are new, I can rapidly scroll through all the changes at all the sites, skimming for the stuff I want to read.

And this is not just about blogs. Big publishing sites like the BBC, CNET, CNN, and the New York Times have feeds too. Most sites you like probably have a feed already; those that don’t will have feeds soon.

If this does not seem significant to you, try it. I know very few people who have gone back to the old way.

How You Can Get In on the Action

There are many ways to read feeds. I recommend you start with a Web-based feed reader like Bloglines or (for MyYahoo users) the RSS-reading features in MyYahoo.

A Few Notes to Help You Get Started

Just like a site has a URL for its home page, a site has a different URL for its feed. The difference is, the feed page is designed to be read automatically by systems like Bloglines, not by humans.

When you ask a service like Bloglines to track a feed, you provide the feed’s URL. You are then “subscribed” to that feed. But don’t worry: It’s not like a subscription where the site knows who you are; it’s anonymous. Also, almost all feeds are free, as are services like Bloglines.

When you are at a site, look for icons like this:

[three example feed-icon images appeared here]

Although these three images are just examples and thus are not clickable, similar images usually link to feeds. My clickable feed image is on the right side of every page on this site. If you don’t see it, scroll up near the top. The image has a link associated with it. Just copy the link into your feed reader. Or, if you have a feed-discovery feature associated with your browser (here’s an example for Bloglines), it should find my feed when it is applied to any of my blog pages.

Sometimes a page will have multiple feeds in different formats with names like RSS and Atom, perhaps with different version numbers. If you use a major feed-reading service or product, any of them should work.
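Under the hood, a feed is just structured XML that a reader parses for new items. Here is a minimal sketch of that step, using a made-up RSS 2.0 snippet in place of a real feed URL’s content:

```python
# What a feed reader does with a feed's content: parse the XML and pull out
# the items. The RSS 2.0 snippet below is a made-up example; a real reader
# like Bloglines would fetch it from the feed URL and remember which items
# it has already shown you.
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def item_titles(rss_text):
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(RSS))  # ['First post', 'Second post']
```

This is why the formats are interchangeable in practice: RSS and Atom differ in element names and details, but any major reader knows how to parse both into the same list-of-new-items experience.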

Monday, September 19, 2005

Companies as Products

With the tech sector heating up again, it’s a good time for venture-funded entrepreneurs to think not only about their companies’ products but also their companies as products: Unlike your actual product, your company-as-product operates in a capital marketplace of investments, IPOs, and acquisitions. Managing this marketplace matters as much as anything else you do, both in normal times and especially when financial markets get exuberant.

A scenario to consider:

Advances in technology have led to a small but fast-growing market with huge potential. Jack’s start-up has the best product and is executing well. Jill’s start-up is a me-too clone of Jack’s, created later and still playing catch-up.

Although her product is less-developed, and her business is well behind Jack’s, Jill is far more aggressive than Jack in the company-as-product marketplace. The IPO market is hot, and she gets there first, selling the concept of her company—which is a clone of the concept for Jack’s company—to the public markets.

As Jack watches from the sidelines, he knows that his business is the one that actually has the substance behind Jill’s story. But when Jill’s company raises $100 million in cash from the IPO, Jack has a problem. Jill now has the resources to shore up her product and, in the meantime, overwhelm Jack’s marketing and sales efforts with five times the feet on the street. In addition, Jill is bathing in the free publicity of being the poster child of this hot new market.

Now the investment bankers are knocking on Jack’s door, saying he could be like Jill, having a big IPO. But having gotten out first, Jill can use her resources to damage Jack’s IPO. Ouch…she just used some of her IPO cash and inflated stock to buy the one company Jack was actually worried about, a bunch of rocket-science types whose next-generation technology is disturbingly good. Now that company’s technology is in the hands of Jill and her expanding sales and marketing machine. Between this move and her PR from the IPO, she is defining the playing field.

As Jack prepares his IPO, he now is on the defensive for being the me-too player.

Is this fair? If you said no, you missed the point. Jill leapfrogged Jack because she seized an opportunity in the larger system, using the company-as-product market to vault her forward in the actual-product marketplace. It’s fair as long as you realize that the game is played at both levels, especially in times when financial markets are exuberant and thus less discerning than they should be.

The lesson: Although many tech entrepreneurs would like to just build great products, building a great company can also require playing like Jill, or at least actively blocking Jill-like competitors. That requires thinking about companies as products.

Consumer Apps for Real-Time Biofeedback

Look for some interesting companies and products to come from the commoditization of technologies related to real-time biofeedback. Let’s illustrate with an example.

Yale Professor Robert Grober has developed a device that lets you hear your golf swing. Ingredients: an instrumented golf club, with sensors that wirelessly transmit data to a receiver, which in turn converts the golf club’s telemetry data into an audio soundscape. Different swing parameters contribute to the soundscape, allowing the golfer to “visualize”—through sound—his/her swing as it happens.

The “as it happens” part is critical, because the core skill in golf is developing muscle memory to swing and putt correctly, consistently. Getting the feedback of what you’re doing while you’re doing it is much better than, say, reviewing a video of your swing.
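To make the sonification idea concrete, here is a hypothetical sketch of the simplest possible mapping: telemetry samples (say, club speed) mapped linearly onto audio pitch, so a faster swing literally sounds higher. The ranges are invented for illustration; Sonic Golf’s actual soundscape is surely richer:

```python
# Hypothetical sonification sketch: map a telemetry stream (club-head speed
# samples) onto audio pitch so the athlete hears the motion as it happens.
# The speed and pitch ranges below are invented for illustration.

def speed_to_pitch_hz(speed_mph, lo=220.0, hi=880.0, max_speed=120.0):
    """Linearly map 0..max_speed mph onto a lo..hi Hz pitch range."""
    clamped = max(0.0, min(speed_mph, max_speed))
    return lo + (hi - lo) * (clamped / max_speed)

# A swing accelerating through impact produces a rising pitch contour:
swing_samples = [10, 45, 90, 120]  # mph samples over the course of a swing
print([round(speed_to_pitch_hz(s)) for s in swing_samples])
```

The real product would feed such pitches to a synthesizer in real time, with multiple swing parameters each driving a different dimension of the sound.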

Grober’s company, Sonic Golf, has a Web site, which explains why this product is becoming real now:

Sonic Golf products are enabled by the convergence of four technologies, each one of which is driven by an existing industrial base: 1) silicon based sensors, including MEMs accelerometers (automotive industry) and gyroscopes; 2) low power hand held electronics, including micro-power microprocessors and micro-power A/D converters (PDA and cell phone industry); 3) digital music synthesis (sound cards, computer multimedia/gaming applications); and 4) the IEEE 802.15.4 communications protocol (Zigbee) for extremely low power (mW), relatively low bandwidth (100s kB/sec) wireless communications.

In other words, the key enabling technologies are available off-the-shelf, at reasonable cost. Besides the obvious variations on Sonic Golf for other sports, how else might these technologies be applied?

Let’s start with a system for improving posture. If you are unaware of this market, a Froogle search for “posture” yields some 19,000 product listings at retailers. As with a golf swing, learning to sit and stand correctly is well-suited to biofeedback using real-time sound. However, the market is no longer just golfers; it’s everybody. (Cue ominous voiceover: “Bad posture not only affects how others perceive you. It hurts your health.”)

What about a smart chef’s knife, which uses sound to tune the user’s skills at slicing, dicing, chopping, mincing, and such? How about a bicycle that teaches kids to ride it?

If we assume that component costs drop so low that almost anything can be “sonified,” what happens? Smart chopsticks, anyone?

Sunday, September 18, 2005

High-Definition Lettuce

In terms of lettuce, I grew up a member of the Iceberg Generation. Our lettuce was a greener shade of pale, but what it lacked in taste it made up in crispness. On sandwiches, that crispness broke the mouthfeel monotony of Oscar Mayer mystery meat on Wonder bread. But if one were to experience iceberg lettuce as the featured attraction, such as in a “salad,” it was just roughage.

In the past decade, the U.S. population has begun an ascent up the lettuce hierarchy of needs. Where once our ancestors foraged only for iceberg at the local supermarket, now they return with a plenitude of choices: hail-caesar romaine, post-Popeye spinach, the weedeater’s frisee, and other varieties with code-names like “arugula” and “butterhead.”

Living in Northern California, epicenter of lettuce actualization, I bring a report from the future. One of the best restaurant dishes I ever had was a recent salad. It was a small head of organic butter lettuce and a simple mustard-vinaigrette dressing. That’s it. No croutons, no crumbled feta, no nothing.

If iceberg lettuce was like black and white television, and typical Northern California organic greens are like color TV, this was high-definition lettuce. Beyond that phrase, I won’t try to relate the experience. My only point is to say that the trend toward better lettuce continues, not just in variety but in taste. So if you see a suspiciously spartan lettuce dish at a high-quality restaurant, try it.

[For the record, my butter-lettuce epiphany dish was at a San Francisco restaurant called La Suite, corner of Embarcadero and Brannan, now defunct.]

It’s Like Golf But with a Shotgun

I’d like to thank Eli Marcus, who in a recent conversation enlightened me to the existence of “sporting clays.” When I heard, “It’s like golf but with a shotgun,” I had to know more.

Turns out his description was not an embellishment. Wikipedia’s Sporting Clays article tells us: “Sporting Clays is a clay pigeon shooting sport. Often described as golf with a shotgun, the sport differs from skeet and trap shooting in that it involves shooting clays at various locations which are launched at different velocities and angles.”

Over to the Sporting Clays Magazine FAQ for a little extra color:

With variations in trap position, trap speed, shooting position, and flight paths of different types of clay pigeons, targets can come through the trees, from under your feet, straight down, over your head, quartering, going away, left to right, right to left, and in any path a real bird might choose. The key words are unpredictable, variable, and sometimes bordering on impossible.

If bass fishing can be a televised sport, and my cable system has 500+ channels, I’m wondering what kind of market failure is responsible for the lack of sporting clays coverage on TV.

SRI Media Futures Program

SRI International is one of the world’s largest independent research organizations. When I was there in the early to mid-1990s, you could walk across the campus and pass groups working on artificial intelligence, economic development programs for post-Soviet states, improvements to the public-education system, an easy-clean oven surface, military communications networks, and cancer drugs. If you’re into inventions and innovations, definitely check out the SRI timeline.

At SRI, if you could find government or corporate entities to fund your research, you could largely do whatever you wanted. In 1991, I and two colleagues, Ed Christie and Paul Di Senso, decided that the world needed a research program about the future of digital media. This was before DVDs, DirecTV, digital cable, Tivo, and the rise of the commercial Internet. The outlines of these technologies were becoming visible on the horizon, so we bet that many companies would want to know what was going to be real when.

We somehow wangled internal seed money to get the program started and thus was born the Media Futures Program. It was a “multiclient” research program—that is, many client companies would each kick in a yearly fee and then receive back the sum total of research. In essence, it was a market-research business, albeit with an SRI twist. We combined quantitative survey research about consumer demand, an engineering view of technology feasibility and costs, and business analysis of strategic, competitive, and regulatory factors. Put another way, we did original research on what people wanted, what technology could deliver at what cost, and which companies were likely winners.

I was one of the two research leaders. That meant that I managed half the projects, presented results to clients, and participated in much of the sales effort. I also had the fun of getting hands-on with a lot of different research tasks: evaluating the early HDTV prototypes, creating a Monte Carlo simulation of the factors driving video-on-demand uptake, and authoring a lot of documents about audio and video compression (technologies that would enable MP3 players, digital-cable boxes, and many other devices).
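The Monte Carlo approach mentioned above can be sketched in miniature: draw each uncertain cost and demand factor from a plausible range, run many trials, and look at the distribution of outcomes. Everything below (the factor names, the dollar ranges, the payback test) is an illustrative stand-in, not the original SRI model.

```python
import random

random.seed(0)  # make the run reproducible

def vod_payback_months():
    # One trial: draw each uncertain factor from a range, compute payback.
    # All ranges are illustrative, not the original model's inputs.
    set_top_cost = random.uniform(300, 1500)    # $ per subscriber
    network_cost = random.uniform(500, 2000)    # $ per home passed
    monthly_spend = random.uniform(5, 25)       # $ per subscriber per month
    adoption = random.uniform(0.05, 0.30)       # share of homes subscribing
    capex_per_sub = set_top_cost + network_cost / adoption
    return capex_per_sub / monthly_spend

trials = sorted(vod_payback_months() for _ in range(10_000))
median = trials[len(trials) // 2]
viable = sum(t <= 120 for t in trials) / len(trials)  # payback within 10 years
print(f"median payback: {median:.0f} months; viable share of trials: {viable:.0%}")
```

The value of the exercise is less any single number than the shape of the distribution: if only a thin tail of scenarios pays back, the visionaries are betting on the tail.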

[Now that Google has made the Usenet archives searchable back to the 1980s, the technically inclined can see what we were talking about when today’s common digital-video technologies were being hashed out in 1992-1994.]

As the idea of “digital convergence” heated up—thank you, Wired magazine, for casting cable and telco CEOs as rock stars—our little research program got big. Apple, AT&T, Disney, Microsoft, Philips, Sony, and most of the “Baby Bells” were clients, as were dozens of other U.S., European, and Asian players.

Ironically, we got most of our notoriety from making negative assessments. In 1992-1993, interactive TV and video-on-demand were the big ideas, which everyone was pushing hard. Our research indicated that the proponents were living in a fantasy world in terms of their ability to deliver such services at anywhere near economical costs.

We were not trying to be contrarians. In fact, we said that interactive television and video-on-demand would make sense someday, but a set of intermediate products and services would emerge first. The most important of those were DVDs and broadcast digital video, which we correctly picked as the big winners of the late 1990s. But because we questioned the near-term commercial viability of interactive television and video-on-demand, that was “news.”

In market research, being known as negative is not usually a ticket to success, but we called things as we saw them. And for those willing to read past the headlines, we highlighted plenty of opportunities in what were then seen as less-sexy areas. Ultimately, however, the final verdict came when the clients, whose visionary CEOs we were undercutting, kept renewing their participation in the program.

We also did consulting projects for specific companies. My favorite was a job we did for Technicolor’s CEO and executive team. We helped them decide to get into the manufacturing and distribution of prerecorded CDs and DVDs. This was well before the first DVD was released, yet they went with our call that DVD was going to be big. It was not a slam dunk given the failure of two previous formats, analog videodisc and VideoCD. However, Technicolor went on to become a dominant player in DVD manufacturing, producing more than a billion DVDs per year.

It was a great time and a great team: the three founders plus honorary founder Michael Gold (abducted from SRI’s engineering labs) and exceptional colleagues Adam Gross, Dave Rader, and Joyce Thom.

In 1994, I and two different SRI researchers had started another program, iVALS, which focused entirely on the Internet as a medium for measuring and understanding what consumers wanted and, ultimately, using that knowledge to personalize media back to them. That meant I did progressively less Media Futures until I and the iVALS colleagues left SRI at the end of 1995 to form Personify.

But I’m pleased to report Media Futures kept going strong, continuing to scan the horizon of new digital-media markets as they emerged. Its current incarnation is Digital Futures, part of the SRI spin-off SRI Consulting Business Intelligence.

Personify Retrospective

Personify started before the dot-com boom and outlived most of those claimed by the bust. Yet its six-year run included an intense dose of the good, bad, and ugly of that era.

I led the founding team in January 1996, and I stayed for the whole thing, through August 2002. My official title was CTO, but at one time or another I also sold our products, served on the board, deployed software in the field, was interim CEO, hacked out random fixes for bad customer data, cut partner deals, and did whatever else needed doing. Together, these activities were the best input possible for my most frequent role: something like a VP Products, working with a team of exceptional people in defining, designing, and delivering products that were among the most advanced of their type.

The Technology

Personify was one of the early players in enterprise Web analytics. For the sake of a definition, “Web analytics” means software that helps measure and analyze a Web site for the purpose of making it more effective. “Enterprise” means the software does enough—and costs enough—that buying it is a corporate decision, not something a Webmaster expenses on a credit card. At this level, our initial competitors were start-up companies Accrue, Andromedia, and NetGenesis.

The name Personify aptly summarized the vision: (1) enable a Web business to analyze and personify its audience, to understand what people wanted not as a whole but as segments and even individuals; (2) then use the learnings to make better business decisions and, ultimately, to personalize what people see.

Today’s analogy is when Amazon.com learns about you and then adapts your experience accordingly. For the late 1990s, it was a big vision to fulfill as a software provider. Your solution needed to work across a wide variety of sites, which didn’t have Amazon.com-like resources.

Although the competition could spin similar visions, their technical foundations were Web-site reporting tools. These tools were engineered for large-scale reporting on site traffic (page counts, session durations, bytes downloaded, and such) but ill-suited for what came to be known as CRM (customer relationship management), where the center of the data universe is customers (or, in this case, Web-site users). We were the only ones taking a CRM approach to Web data. We had privacy-protected user profiles, data-mined segmentations, and on-the-fly analysis of behavioral, purchasing, and demographic attributes. For 1998 and many years after, it was a unique feature set.

The Value

With Personify, a marketer could discover (via data mining) a set of behavioral segments, evaluate each segment’s relative value to the business, and then explore what made each segment click: Which outbound advertising campaigns are delivering my “Core Buyers” segment? Are my on-site content investments paying off for the “Researchers” segment? Is my latest wave of promotion just drawing the low-value “Price Predators” segment?

From day one, Personify customers were able to explore and answer these questions instantly. Later, we added a module that integrated with email and Web-site-serving systems, so Personify data could drive personalization—for example, “Target this special email only to ‘Core Buyers’ who have not visited in three months.”

In sum, we were in the business of enabling “difference marketing,” where a business adapts itself to the key differences among customers, rather than making every decision based on Joe or Jane Average. In traditional direct marketing—the businesses behind the catalogs and credit-card solicitations you get in the mail—difference marketing had already proven the superior approach. But because difference marketing was fueled by data, and Internet data was like a new kind of jet fuel, there was an opportunity to do even better.

The Ascent

We were seed-funded in spring 1996 by U.S. Venture Partners. We spent the rest of the year as a garage shop called Affinicast, proving the basic opportunity and technology. Having passed that test, we spent 1997 building the product and the core of a company to market, sell, and support that product. We also changed the name to Personify.

In February 1998, we launched the company and previewed the product at Internet Showcase, where we won a “Best in Showcase” award.

The first official release was in June 1998, by which time we already had paying, referenceable customers and a fan club of analysts and industry types.

1999 was pure growth, as we scrambled to hire sales and support personnel to chase the demand. To reinforce the core, we acquired a 30-person professional-services company, Anubis, which specialized in data-warehousing technology. It was a great injection of talented, dedicated people.

The Peak

By early 2000, Personify was widely regarded as a hot commodity. We had just raised private money at a $500 million valuation. We had turned away multiple acquisition approaches and were preparing an IPO filing. Having watched companies in our competitive set go public to billion-dollar market caps, our investors were hungry for a home run, as were the employees, all of whom already knew somebody who got dot-com rich.

By the peculiar financial standards of the day, we looked good. We had already doubled our 1999 revenue in the first quarter of 2000 alone. For that quarter, we reported $1.25 million in revenue, under a conservative accounting practice that recognized each contract’s value ratably over its term. When you combined the fast growth with that accounting’s trailing indicator, it was obvious that we were heading for something like $10 million in new business that year.
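That trailing-indicator effect is simple arithmetic. A sketch, with illustrative numbers: if each contract's value is recognized evenly over four quarters, even perfectly steady bookings show up in reported revenue only gradually.

```python
def recognized_revenue(bookings_per_quarter, quarters, term=4):
    # Each contract's value is recognized evenly over `term` quarters,
    # starting in the quarter it is signed.
    reported = []
    for q in range(quarters):
        # Sum the slices of every contract signed in the last `term` quarters.
        slices = [bookings_per_quarter[i] / term
                  for i in range(max(0, q - term + 1), q + 1)]
        reported.append(sum(slices))
    return reported

# Illustrative only: $2.5M of new business booked in each of four quarters.
print(recognized_revenue([2.5, 2.5, 2.5, 2.5], 4))
# The first quarter reports only 2.5/4 = 0.625; the run rate catches up over a year.
```

So a fast-growing company under ratable recognition always looks smaller in reported revenue than its bookings imply, which is the gap between the $1.25 million quarter and the ~$10 million year.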

The other standout for Personify was the customer list: Our IPO filing listed more than 50 customers, many of whom were household names. Given the media’s interests at the time, we probably got more buzz out of serving Drugstore.com, eToys, Petopia, and DrKoop.com than we did Barnes & Noble, Volvo, J. Crew, L.L. Bean, or REI. A lot of these companies were dabblers in what we enabled—sophisticated clickstream data mining, analytics, and targeting—but they paid real money nonetheless.

We filed to go public in May 2000, two months after the NASDAQ peaked. At the time, no one knew it had peaked. When we filed, the NASDAQ had bounced back near its position at the start of 2000. But the markets were jittery, and the Internet IPO window was in the process of closing—for years, it turned out.

The Fall

Despite the non-IPO letdown, we ended up exceeding $11 million in new business for 2000 anyway, acquiring customers such as American Century, Bose, Continental Airlines, the New York Times, Neiman Marcus, and Volkswagen. These names reflected the fact that in early 2000 we stopped pursuing the dot-coms, raised our prices, and aimed squarely at corporate America. As a result, the average deal size climbed above $250,000. But despite doing bigger deals, with bigger customers, and capturing far more revenue, the Personify of early 2001 was no longer worth $500 million, or anything close to it.

This irony was not unique to Personify. As the Internet boom hit its limit, the super-inflated valuations of Internet companies gave way. To put it in airline-safety speak, a loss of cabin pressure occurred, but the oxygen masks didn’t drop. Valuations had gotten so disconnected from business fundamentals that when the fall came, the decompression was massive, overwhelming whatever other factors might normally contribute to a business’s value. For a lot of Internet companies, the plane outright crashed; for those like Personify, it leveled off just above the treetops.

In 2001 and 2002, our (former) dot-com customers were not the only ones suffering. More important for us, big corporations were hurting. They were putting enterprise software purchases on indefinite hold, whether from Internet upstarts like us or from the likes of Oracle and Siebel. Everybody’s numbers were way down. The real question was who had the cash to survive until things got better, whenever that might be.

It didn’t get better soon. Although we had enough substance to raise some additional private money in the early part of the bust era, it was a far cry from an IPO’s proceeds. So we did as virtually all others and managed the business’s costs against the falling revenues. That meant multiple waves of layoffs in 2001 and 2002.

By 2002 Personify reached the point where our products would win deals and then the company’s financial condition would un-win them. A prospect would want to buy but be concerned about whether we’d be around next year. Assuming that other prospects would have the same concern, each would walk away, putting us on the wrong side of a self-fulfilling prophecy: No one buys because they think no one else will buy.

We had not helped our cause when, in 2001, we absorbed a start-up that wasn’t viable alone but had some cash. It also had some debt. The deal gave us the appearance of being a consolidator in a consolidating market, but it was too little, too late. The cash was nice, but the debt was a time bomb, with a fuse set for mid-2002.

The Final Cut

By mid-2002, we were running out of time and alternatives. In an effort to circle the wagons, we agreed to be acquired by the only other then-independent company from our original competitive set, Accrue. As of late July 2002, we had a signed term sheet, the final documentation was prepared, and all Personify employees and customers were going to be carried forward.

Then a bad thing happened, symptomatic of a time when bad things happened a lot: A creditor we had inherited from the 2001 cash/debt deal was now in bankruptcy. Too many of his boom-financing deals had gone bad. So, despite having signed off on the Accrue deal’s term sheet, the creditor scuttled the deal on the day it was to be finalized. Instead of signing, he made a play for our remaining cash by effectively killing the company.

This event took down Personify, which paid out what was left to creditors and closed in August 2002. Accrue later ran out of gas as well. After a bankruptcy filing, its assets were resurrected as a different company.

Meanwhile, the next generation of enterprise Web analytics players had emerged in the late 1990s. They were either new companies (founded a few years after the first generation) or smaller, lower-end providers that were graduating to serve the enterprise. The new generation’s offerings had less functionality but were packaged as easy-to-buy application services. For the bust era, cheap and easy was the right value proposition. Although corporations could not make six-figure, enterprise-software purchases, they could find a few thousand dollars a month out of a marketing budget. And because they could pay monthly, the “going concern” issue didn’t matter as much.

The best of that new generation (companies like Omniture, Coremetrics, and WebSideStory) would later fill out their features and, with the economic recovery, raise prices to resume where the first generation left off.

Clever versus Stupid

“It’s such a fine line between clever and stupid,” observed David St. Hubbins in the movie This Is Spinal Tap. Let’s explore that line.

In retrospect, Personify should have been acquired. We declined multiple opportunities. But at the time, an acquisition was not so obvious. Here is why.

When we did our private financing at a $500 million valuation, Personify was substantially undervalued against companies with similar financials that had already gone public. Yes, a huge portion of everybody’s valuation was hot air, but those who went public transformed their hot air into piles of cash, maintaining independence in the process. It was an appealing path. Looking back, it was arguably the only path for an Internet enterprise-software company like Personify to have a shot at remaining independent through the bust, taking the boom’s loose money and squirreling it away for the long winter.

And it wasn’t just about the money. Personify had taken the big-vision approach in a time of great change. It was a rare chance to build a company of consequence. A quick way to kill that chance would have been to pull the ripcord on an acquisition, thereby becoming a cog in someone else’s machine.

Of course, the IPO path was only open for a limited time. We narrowly ended up on the wrong side of the line, that fine line. And thus history tells us that we should have sold out to an acquirer at the top, notching a $500 million win for the employees and investors. But this lesson amounts to little more than “buy low, sell high.” (Thanks for the tip.)

Endings as Beginnings

The good news is that the dozens of people who comprised the Personify core have long since gone on to great things. (Because Personify had so many heroes, I’ve avoided name-dropping a few at the expense of others. They are all deserving. I’ve also omitted discussion of a few villains because life is too short.)

Of the first 80 employees, 15 have since been founders of companies, often together in pairs or threes. For my part, I went on to start another company, which was acquired by CNET in late 2004. Although it was a different kind of venture, it was informed by many Personify lessons.

As for the technology, last I heard it was still running at several sites. That included one doing the whole vision—analytics integrated with personalization—on a terabyte-scale Personify database. My heart warms at the thought.

And on that note, I say this: If you do a start-up, you’d better enjoy what you work on and who you work with, because a big payoff may come or not—and statistically speaking, you’ll usually be looking at not. Even on Personify’s worst days, there was always the quality of the core people—of whom there were many, in all parts of the company—who thrived on doing what had not been done before and doing it well. If instead we were slogging something just to make a buck, the game for most of us would have been lost before the outcome. I’m glad I was a part of it.

SRI iVALS Program

In 1994 and 1995, I led a research program at SRI International called iVALS (Internet Values and Lifestyles). It was the dawn of the commercial Internet, and we were among the first to measure, analyze, and segment the Internet audience.

I had been at SRI since 1990, mostly in another research program, the Media Futures Program, which was about the commercialization of digital-media technologies. A regular theme of the Media Futures Program was that when a networked, interactive form of digital media arrived, the game would change. Not only could the consumer actively control what he or she saw, but an electronic newspaper or catalog could adapt itself to the consumer’s interests by learning from the consumer’s behavior. Compared to broadcast media or print, it would be more like a two-way conversation.

In the early 1990s, interactive television (ITV) was supposedly going to be the medium that made all this real. We studied ITV in enough depth to realize that the technologies of the day were too costly to use at large scale, no matter what the visionary CEOs proclaimed.

But something else was happening that most people didn’t notice, whereas we had a front-row seat: the dawn of the commercial Internet. SRI had been on the receiving end of the first Internet packets in 1969 and since then had a deep Internet culture. Thus, we were among the first to see and use new Internet technologies of the early 1990s, such as Gopher, WAIS, and this thing called the World Wide Web.

When the Web appeared, we realized that it could economically scale to reach millions of people in the near term. People already had computers, and it required far less network bandwidth or server technology than ITV. The Web could be the networked, interactive medium to change the game.

At the time, the only other contenders were commercial online services like AOL, CompuServe, and Prodigy. However, they were more about email and message boards than enabling anything like Web sites. In addition, they were closed, fee-based networks that were not compatible with each other. In contrast, the Web was free and open, the two key ingredients for its hypergrowth.

I and two colleagues at SRI, Adam Gross and Bruce MacEvoy, bet that the Web would matter. We created iVALS to explore what it meant to measure and analyze the most data-rich medium ever. I led the program but special credit goes to Adam for being the earliest evangelist of the Internet’s importance.

By a coincidence of history, SRI was the birthplace of VALS (Values and Lifestyles), the best-known system for analyzing the U.S. population psychographically—that is, by people’s beliefs and values. Bruce was a consumer psychologist and statistician who worked on VALS2, a revamped system designed to analyze and predict consumer behavior. His consumer-research angle plus Adam’s and my digital-media angle equaled iVALS.

Our first project, in 1994, was the creation of a Web site where people could take a short questionnaire and receive their VALS2 type, including a dynamically generated description. It was one of the earliest database-driven Web applications with adaptive content. More than a decade later, a descendant of the site still exists, although I see they’ve changed the names of some of the types and removed the dynamically generated descriptions.

With thousands of people taking the questionnaire and getting typed, we had one of the first images of the early Web audience. Fully 50% of the respondents fell into a segment of highly educated professionals, one that comprises only 10% of the U.S. population. It underlined the point that the 1994-1995 Web audience still largely reflected the Internet’s academic and corporate-research heritage.
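In market-research terms, that 50%-versus-10% comparison is an over-representation index, where 100 means parity with the general population:

```python
web_pct = 50  # percent of questionnaire respondents in the segment
us_pct = 10   # the segment's percent of the overall U.S. population

# Standard market-research index: 100 = parity with the general population.
index = 100 * web_pct // us_pct
print(index)  # 500: the segment was five times over-represented online
```

An index of 500 is enormous by consumer-research standards, which is why the skew jumped out of the data.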

The specialized nature of that era’s Internet audience called for a specialized psychographic system, which led in 1995 to the iVALS typology, an Internet-specific set of segments. The purpose was to capture factors inherent to the new, technical nature of the medium. For example, to understand online behavior, it didn’t matter if a person was highly sociable if that person didn’t know how to use a chat application (this was before the days of chat being built into Web pages or user-friendly instant-messenger apps).

The point of the exercise was to model why people did, or did not, do things online, using a combination of psychographic and technical-capability factors. Companies were just starting to ask how the new media was different from the old, and we were among the first with answers. Being at the edge of that frontier was a great experience.

As we did our work, we would run into people from the relatively few Web companies of the day, like Yahoo, Excite, and HotWired. Everybody loved what we were doing, but they all said the same thing: “You need to start a company.”

We did, and that company became Personify.