Sunday, March 30, 2008

Rhetoric and the Visual Word

Go back a few thousand years and you’ll find the Greek origin of the word rhetoric. It means the art of effective speaking, especially to persuade.

A few hundred years ago rhetoric broadened to include effective writing as well as speech. Writing was good for rhetoric because it allowed greater complexity of ideas. To get a sense of the difference, try composing an essay entirely in your head.

A few years ago it started to become clear that the definition was due to broaden again. Low-cost and no-cost tools emerged for creating and distributing short-form animations, screencasts, and videos. The result has been an increasing number of multimedia shorts that do rhetoric in ways the written or spoken word cannot.

A couple examples:

On paper or a Web page, “data portability” is unlikely to engage Joe or Jane Average. But in 1 minute and 50 seconds, this video does an admirable job of making the topic matter.

More ambitious is Professor Michael Wesch’s Web 2.0 ... The Machine is Us/ing Us. In less than five minutes, he tackles big, abstract, and technical issues. So far, the video has nearly 5 million views on YouTube, which is in the same league as the YouTube classic Diet Coke + Mentos.

These examples illustrate that multimedia—the visual word—is good for rhetoric because it can make complex ideas engaging and understandable for nonspecialists. In our world of ever deeper, ever more specialized knowledge, this accessibility has real value.

Moreover, because the costs of creating and distributing multimedia content are high relative to text, the short form is a natural. This has the benefit of forcing concision—the opposite of the mathematician Pascal’s quip, “I have made this letter longer than usual, only because I have not had time to make it shorter.” In our time of information overload and attention scarcity, concision is king.

So from the spoken word to the written word to the visual word, we are evolving toward both greater complexity of thought and wider understanding of complex subjects. What we are seeing today is the beginning of multimedia rhetoric’s part in this evolution. And you can say you were there.

Monday, March 17, 2008

Eastward

It’s not often that you pack up your life and take it across the country, but that’s what we did in early March. One day we were living in a San Francisco urban highrise; the next we were amid snow-covered fields in a town outside Hartford, Connecticut.

A few weeks later, I’m pleased to say, “So far, so good.” Our two-year-old daughter has happily adapted to new everything. And—the reason we’re here—Jacqueline is now an executive at a Fortune 100 financial-services company based in the area.

I am continuing my CNET duties, working out of a home office but traveling regularly, including to San Francisco. So to those in my Bay Area network, I’ll be around. And for friends and colleagues in New York and Boston, I look forward to seeing more of you. Hartford is about halfway between, a couple hours by car.

There is much we will miss about Northern California. But I like the idea of change when the circumstances are right. Long story short, the circumstances were right.

Wednesday, March 12, 2008

Correcting for the Human Factor in Movie Ratings

A recent Wired article, This Psychologist Might Outsmart the Math Brains Competing for the Netflix Prize, is about Gavin Potter, a retired management consultant who is singlehandedly yet effectively competing against corporate and academic research teams in the top tier of the Netflix Prize.

(The Netflix Prize is a $1 million challenge to anyone who can exceed the performance of Netflix’s movie-recommendation algorithm by 10%. Netflix provides a big database of its users’ movie ratings as grist for the contestants’ mills. It also provides a means to test contestants’ predicted ratings against users’ actual ratings, thus measuring accuracy. Although a 10% improvement may not sound like much, I’ve previously discussed why it is not easy.)
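For the curious, the contest scores submissions by root-mean-square error (RMSE) against a held-out set of actual ratings. Here is a back-of-the-envelope Python sketch of that scoring, with made-up numbers (the real test set runs to millions of ratings):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and actual star ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Made-up ratings on the 1-5 star scale, purely for illustration.
actual = [4, 5, 3, 2, 4]
predicted = [3.8, 4.6, 3.4, 2.5, 4.1]
print(round(rmse(predicted, actual), 3))  # 0.352; the prize requires beating Netflix's own RMSE by 10%
```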

The leading research teams are each exploring variations of statistical/machine-learning approaches, looking for new refinements to relatively well-understood algorithms. While Potter no doubt uses one or more of the standard algorithms, he has apparently gotten a long way with few resources by correcting for well-known behavioral quirks that affect how people rate things. As he puts it, “The fact that these ratings were made by humans seems to me to be an important piece of information that should be and needs to be used.”

The article provides an example:

One such phenomenon is the anchoring effect, a problem endemic to any numerical rating scheme. If a customer watches three movies in a row that merit four stars — say, the Star Wars trilogy — and then sees one that’s a bit better — say, Blade Runner — they’ll likely give the last movie five stars. But if they started the week with one-star stinkers like the Star Wars prequels, Blade Runner might get only a 4 or even a 3. Anchoring suggests that rating systems need to take account of inertia — a user who has recently given a lot of above-average ratings is likely to continue to do so. Potter finds precisely this phenomenon in the Netflix data; and by being aware of it, he’s able to account for its biasing effects and thus more accurately pin down users’ true tastes.
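The article doesn't spell out how Potter implements the correction, but here is a rough Python sketch of what one inertia adjustment might look like: estimate a user's recent "mood" from their last few ratings and pull each rating back toward their long-run average before modeling taste. The function name and the windowing choice are mine, not Potter's.

```python
def debias_recent_drift(ratings, window=5):
    """
    Hypothetical anchoring correction: estimate the drift of the user's last
    few ratings relative to their overall mean, and remove it, so a streak of
    generous (or harsh) ratings doesn't masquerade as a change in taste.
    """
    overall = sum(ratings) / len(ratings)
    adjusted = []
    for i, rating in enumerate(ratings):
        recent = ratings[max(0, i - window):i] or [overall]
        drift = sum(recent) / len(recent) - overall
        adjusted.append(rating - drift)  # pull the rating back toward the user's baseline
    return adjusted

# A user on a five-star streak: the raw ratings read high, the adjusted ones less so.
print(debias_recent_drift([5, 5, 5, 4, 5, 4], window=3))
```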

Admirably, the article goes on to consider the obvious pushback:

Couldn’t a pure statistician have also observed the inertia in the ratings? Of course. But there are infinitely many biases, patterns, and anomalies to fish for. And in almost every case, the number-cruncher wouldn’t turn up anything. A psychologist, however, can suggest to the statisticians where to point their high-powered mathematical instruments. “It cuts out dead ends,” Potter says.

Potter’s approach reminds me of ELIZA, a computer program from the 1960s that used simple psychological tricks to impersonate a human—for example, repeating someone’s statement back as a question (“My boyfriend made me come here.” “Why did your boyfriend make you come here?”). Although ELIZA did not know what it was talking about, it often did better at engaging people than far more sophisticated programs that actually tried to understand and respond to what was being said.
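That reflection trick is simple enough to sketch in a few lines of Python. This is a toy imitation for flavor, nothing like the real ELIZA's pattern-matching machinery:

```python
# Toy imitation of ELIZA's reflection trick: swap pronouns, echo the statement back as a question.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are", "you": "I", "your": "my"}

def reflect(statement):
    words = statement.rstrip(".!?").split()
    swapped = [REFLECTIONS.get(word.lower(), word) for word in words]
    return "Why do you say that " + " ".join(swapped) + "?"

print(reflect("My boyfriend made me come here."))
# -> Why do you say that your boyfriend made you come here?
```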

While I’m not suggesting that Potter’s work is the algorithmic sleight of hand that ELIZA was, he is nevertheless tapping the same success factor: exploiting the humanness of the humans in the system. Not only does it work, but in a contest like the Netflix Prize it is particularly effective, because the other leading contestants apparently are not doing it.