Tuesday, September 11, 2012

At Responsys

An update from the professional front: I have joined Responsys as SVP Product Management.

Responsys is a software-as-a-service provider for interactive marketing: It helps companies communicate with customers via interactive channels like email, web, social, mobile, and display ads. As people increasingly spend time online, these channels are where marketers want to be. Plus, compared to traditional channels like television or print, in which you get the same message as everyone else, interactive channels promise to be more targeted and personalized—so you get what’s relevant for you.

Having been in the interactive marketing field since the beginning, I believe in this promise. It is good for companies and customers alike. The challenge is to make it real. Responsys is a leader in doing so, having gone public in 2011 (ticker symbol MKTG) after growing straight through the late 2000s recession.

So, I’m excited to help grow something that already has substantial scale, in a market that keeps renewing itself with new channels and technologies. If you want to join me, we have many opportunities across the company for like-minded people.

Monday, September 3, 2012

Back in the Bay Area

I knew I was back in the Bay Area when:

  • On the second day, exiting Highway 92, I was behind a Google self-driving car.

  • My new town’s waste-collection system gave me three cans: a big one for recycling, a big one for composting, and a small one for garbage.

  • I was walking along a trail, wearing an E*Trade hat I randomly acquired in the past. A guy walking the other way asked, “Do you know what E*Trade closed at today?”

Yes, after four-plus years in West Hartford, Connecticut, we (my wife, daughter, and I) are back. As I and others have said before, West Hartford is great. But for us, the Bay Area is home.

Thursday, July 26, 2012

Omnivark’s Time and Place

Today saw the final edition of Omnivark, my personal project to teach a computer to identify great writing. Each day, Omnivark would pick three pieces of new, nonfiction writing on the Web, plus a book. I’m proud of Omnivark’s quality during its six-month run. (Feel free to click a random day from the archive, and see what you think.)

So why stop now? Omnivark was something different and fun to do during the past months while I was also doing consulting. However, that period was an in-between time, from when I completed the Intelligent Cross-Sell integration at RichRelevance until my family’s move back to the Bay Area, which is happening in August. Soon after, I’ll be resuming my normal career—more on that in a future post.

Suffice to say, Omnivark was of a certain place and time, which are changing. I enjoyed creating it; I hope you enjoyed reading it. If you did, here are some suggestions for alternatives to keep your tank topped with great reads.

And finally, for those interested in how the technology worked:

Omnivark’s Division of Labor

As I taught a computer to recognize great writing, the division of labor between me (the teacher) and the computer changed over time. The Omnivark software started relatively stupid and ended relatively smart.

At the beginning, I was heavily influencing Omnivark’s daily picks of great writing on the Web. This was necessary to create a training set of great writing for Omnivark’s algorithms, so they could learn to find other great writing. Over time, and with a lot of experimentation, Omnivark became smart enough to do most of the work in choosing great reads for an edition.

But to be clear, there was a big distance between Omnivark’s doing most versus all of the work. Most of the work was Omnivark’s finding and scoring hundreds—sometimes over a thousand—of candidate pieces a day. I still needed to pick the best of Omnivark’s best, factoring in issues like diversity of topics and sources.

Unless I was working on the algorithms, I would need to read only perhaps ten of the top-scoring candidates. From them, I often was able to find two of the three Web picks for an edition. I’d find the third pick by scanning further down the list of candidates for an interesting headline, or I would find it from my own normal Web browsing, or I’d occasionally get a great suggestion from an Omnivark reader.

The Omnivark algorithms were capable of rating not only an entire piece but also individual sentences. Once in a while (maybe one in ten times), I’d agree with Omnivark’s choice for the best sentence to use as a quote from the piece. The low agreement was due to Omnivark’s simply judging a sentence for artfulness, whereas I was also judging how well a sentence indicated what the piece was about. Also, I could easily see when multiple sentences, or fragments of sentences, were better than the best sentence—a far from easy task for software.
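The mechanics of that quote-picking step can be sketched simply: score every sentence and take the best one. This is a minimal illustration, not Omnivark’s actual model—the naive sentence splitting and the scoring function are stand-ins for whatever rating the real algorithms produced:

```python
import re

def best_quote(text, score):
    """Pick the highest-scoring sentence as the preview quote.
    `score` is any sentence-level rating function; the splitting
    here is deliberately naive (periods, exclamation points,
    question marks followed by whitespace)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return max(sentences, key=score)
```

With sentence length standing in for artfulness, `best_quote("Short one. A much longer, more elaborate sentence here.", score=len)` returns the longer sentence. The human-versus-machine gap described above lives entirely inside `score`: judging representativeness as well as artfulness, or preferring a combination of fragments, is exactly what a single-sentence argmax cannot do.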

The fourth and final pick in every Omnivark edition was from a book. This turned out to be a distinct challenge because book excerpts are often in PDFs or special viewer applications such as Amazon’s “Look Inside.” Automating their extraction was different enough from everything else I was doing that I ended up picking the books manually, guided by high user reviews and official endorsements.

So, except for the book picks, the division of labor shifted nicely from human to computer. In terms of replacing a human’s hours, Omnivark got maybe 90% of the way there. However, that last 10%’s hours are a lot harder to automate than the first 90%’s. They involve creative judgment such as knowing when different picks go well together, or recognizing that a certain sentence captures the essence of a larger point. Perhaps someday a computer will do that too, but there will always be the need to model specific humans’ judgments; otherwise, it would be like having the same editor for every magazine. In that sense, humans will always be the teachers.

Thursday, June 21, 2012

Stylistic Signals

As Omnivark trawls the Web for new, great writing, it has two distinct tasks. First, where does it find the candidates—the articles, essays, and blog posts—that might be great writing? My previous post, Following the Elites, was about this challenge.

Second, once Omnivark has a set of candidates, how does it know which few are great? For example, given an entire issue of The New Yorker, what is the best thing in it?

The New Yorker’s editor might say it’s all great. And different readers will surely have different opinions of what’s best. So to clarify: In this case best means most like the structure and style of other great reads. (The other great reads were classified as such by a human expert.)

Note that we are comparing texts’ forms, not their topics. So, given a great read about a boar-hunting congressman, Omnivark will try to find more pieces that are written like that, as opposed to more pieces about boar-hunting congressmen.

This is an important distinction. Most text-analytics systems do topic-matching (find more boar-hunting congressmen). Omnivark is about style-matching. Omnivark will measure a new piece of writing against the characteristics of great writing that Omnivark has already modeled. Those characteristics include statistical, semantic, and structural properties of the text. Some examples:

  • Simple statistical properties include the text’s total number of words, the average number of words per sentence, and the average number of sentences per paragraph. These simple metrics are better for filtering out the bad than discerning the best among the good. However, more complex metrics (such as the ratio of nouns to adjectives) resonate with certain writing styles.

  • Semantic properties refer to the meanings of the words used. This is tricky because we want to capture how word choices correlate with style but not with topic. We don’t care that boar appears a lot in the boar-hunting piece; we do care about the artful usage of certain adjectives, adverbs, and other flavoring words, the use of which makes the prose more expressive.

  • Structural properties include how sentences and paragraphs are put together. For example, the use of balanced or parallel phrases is an indicator of expressive writing, as is the use of similes and metaphors. Detecting these structures in a general way is hard.
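To make the simplest of these concrete, here is a minimal sketch of computing the statistical signals from the first bullet. The tokenizing and sentence splitting are crude simplifications for illustration, not how Omnivark actually parsed text:

```python
import re

def simple_signals(text: str) -> dict:
    """Compute a few simple statistical signals of a text:
    total words, average words per sentence, and average
    sentences per paragraph. Naive splitting throughout."""
    # Paragraphs separated by blank lines.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    # Sentences end at ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "total_words": len(words),
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "sentences_per_paragraph": len(sentences) / max(len(paragraphs), 1),
    }
```

Even this toy version shows why such metrics filter out the bad rather than find the best: a wall of hundred-word sentences is suspect, but two pieces with identical averages can differ wildly in quality.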

In the world of search engines like Google, these properties are called signals. Omnivark’s job is to know the signals that best predict great writing. As an extra twist, because great writing takes different forms, Omnivark needs to employ different configurations of signals.

Behind the scenes, I built a tool that makes exploring for signals relatively easy. A new signal can be tested in real time on a set of training texts diverse in style and quality.

For me, this exploration for stylistic signals is the most interesting part of creating Omnivark. Having taught writing, I have reasonably good instincts for prose quality. However, knowing it when you see it is different from generalizing that knowledge into a computer. In practice, it’s easy to identify signals that find great writing but also find a lot of mediocre writing. It is much harder to find the signals that cleanly discern the best from the rest.
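One simple way to quantify that last distinction—a signal that “cleanly discerns” versus one that catches mediocre writing too—is to compare how far apart the signal’s values land for great versus not-great training texts. This helper is hypothetical (the actual testing tool isn’t described here), but it captures the idea:

```python
from statistics import mean, pstdev

def signal_separation(signal, great_texts, other_texts):
    """Rough measure of how cleanly a candidate signal separates
    great from not-great training texts: the gap between the two
    groups' mean signal values, in units of the pooled spread.
    Larger is better; near zero means the signal is useless."""
    g = [signal(t) for t in great_texts]
    o = [signal(t) for t in other_texts]
    spread = pstdev(g + o) or 1.0  # avoid dividing by zero
    return abs(mean(g) - mean(o)) / spread
```

A signal like average word length might score well on a toy training set yet poorly on a diverse one, which is exactly why testing a new signal in real time against texts diverse in style and quality matters.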

Wednesday, June 13, 2012

Following the Elites

In a perfect world, Omnivark’s software would read everything published on the Web each day, then pick the best three “great reads.” That perfect world is not available. But can we find a more practical path to the same results?

With Omnivark, I’ve explored several approaches. In this post, I will focus on the most obvious and, it turns out, cost-effective: embrace elitism. By that I mean track the top publications where the top writers appear. You can argue whether the list of publications should be 20 or 200 long, but either way it’s nothing compared to the millions of other entities—minor publications, blogs, Tumblrs, Quora postings, and such—that comprise “everything.”

The Atlantic Wire’s “Five Best Columns” daily newsletter exemplifies this approach. It appears to draw from a short list of usual suspects: The New York Times, The Washington Post, and a handful of other top newspapers and highbrow magazines/Websites. The results are quite good.

With Omnivark, I use a much wider array of inputs, and the algorithms ignore a piece’s source. (In a similar vein, by intentionally omitting the source publication’s name from the preview quotes, the Omnivark site encourages readers to judge the preview quotes by their quality, not by where they come from.)

Still, Omnivark ends up with a lot of material from that same group of usual suspects. The reason is, true to reputation, they are venues where superb writing appears in volume. This combination of quality and quantity is hard to beat.

As support, consider Longreads, a crowdsourced site that highlights new, long-form nonfiction. Anybody can nominate a piece from anywhere, usually via the Twitter hashtag #longreads. But despite the potentially wide spectrum of nominations, the site’s official picks are still mostly from elite publications.

I doubt the Longreads editors are suppressing non-elite stuff; if anything, I suspect they welcome the chance to boost something obscure yet worthy. But I also suspect most of the (non-spammy) nominations are for pieces in elite publications because of the quantity/quality reason above.

Plus, when nominations are an open process, another factor helps the more popular, elite publications like The New York Times or The New Yorker. They have thousands of times more readers (and Twitter followers) than smaller publications or independent bloggers. So if the same quality of piece appears in the typical blog and The New Yorker, the New Yorker piece will have thousands of times more potential nominators.

All this goes to say that curating just from the elite publications is a good bang-for-buck strategy. It exploits the concentration of high-quality material in relatively few places.

And if you want to take it a step further but keep the bang-for-buck efficiency, you can also track the elite writers directly, such as by following them on Twitter. That way, you can catch their work outside the elites without needing to trawl for it generally. Byliner.com seems to take this approach, as well as commissioning its own pieces.

In theory, an additional benefit of following elite writers is that they can recommend good stuff by other writers. In practice, it works a little, but writers in elite publications often just recommend other stuff in elite publications. Perhaps an apt analogy is with Major League Baseball players, who can talk all day about other MLB players but don’t think as much about what’s happening in the minor leagues.

Of course, this just makes me want to focus more on writing’s equivalent of the minor leagues—the non-elite venues where good stuff lurks deeper and more dispersed. However, if the goal is to surface great writing, today’s lesson is that much of it is already near the surface, in the elite publications where it’s expected to be. Distilling the best of that best is valuable, as the Atlantic Wire’s newsletter and Longreads show. The open question is, how much extra value is there in plumbing the depths further?

Tuesday, May 29, 2012

McMeasure It

Well into Manohla Dargis’ New York Times dispatch from the 2012 Cannes Film Festival is a word worth savoring, McMeasured:

The festival’s prejudice toward — or, more generously, its loyalty to — favorite auteurs has been routinely held against its programmers, as if filmmakers and their works should only be McMeasured by the millions and billions served.

It’s quality versus quantity in a single, artful word.

Monday, May 7, 2012

Why Omnivark?

When I introduced Omnivark a few months ago, many people asked, politely: “Why?” (Quick recap of what it is: The Omnivark Web site helps users discover great writing. Each daily edition highlights three new, nonfiction pieces on the Web, plus a recommended book.)

Omnivark is not about me clicking around the Web all day looking for great writing; it’s about teaching a computer to do that. I did not previously mention the computer’s role because I wanted people to evaluate Omnivark for its content, not its process.

Behind the scenes, the process includes software programs that sift thousands of new Web pages per day, looking for a rare gem. The problem is, physical gems have standardized measures of clarity, cut, and size. The written word lacks equivalent measures, especially to discern great writing from good writing. (Quantifying bad from good is more tractable.)

Lack of measures does not mean lack of agreement about greatness—for better or worse, there are widely acclaimed publications, writers, and pieces. The problem with measuring greatness is the diversity of ways writing can be great. Hemingway’s terseness and Faulkner’s complexity are opposites, yet they are both literary legends from the same era. A gentle eulogy, a political rant, an ironic cultural commentary—should they be judged with the same scorecard? And if great writing transcends mere communication to accomplish something higher, isn’t that beyond the realm of a scorecard?

For me, cutting into this thicket of questions is fun. However, it’s the type of fun suited to a personal project, where walking the path can be the reward. I say that because it’s unclear how far, or where, the path can go. Emulating a human editor’s expert judgment of great writing—based on its content, not on source or popularity or social filtering—is technically hard, if not conceptually quixotic. But that’s what makes it fun. And that, in turn, is the answer to “Why?”

Tuesday, April 10, 2012

The Fastest Human in History

A small voice said, “Don’t let me fall, daddy.”

She was on the bike, wobbly, her confidence gone with the training wheels. I was holding her, gently pushing her forward.

“I’m falling!”

“I’m still holding you.”

“Hold me tighter or I’ll fall!”

I don’t remember learning to ride a bike. I only remember the moment of transition, when I realized I was doing it. The memory has no visual component, but I imagine my father trailing off behind as I self-propelled forward.

On October 14, 1947, a B-29 bomber dropped test pilot Chuck Yeager from 20,000 feet. Yeager was in the Bell X-1, a rocket with wings. Clear of the B-29, Yeager lit the engines.

The X-1 shot upward an additional 20,000 feet, accelerating to 0.92 Mach, 92% of the speed of sound. Then the shaking started.

Other pilots had hit this resistance, which they called the sound barrier. It got worse as you got closer to the speed of sound—how much worse at the extreme, no one knew.

The shaking intensified as the Machmeter read 0.93, 0.94, 0.95, 0.96. The X-1 engineers built the plane for this, but even they didn’t know exactly what this would be. The only way to find out was to go there.

Yeager did, as the X-1 blew through its own shock waves, past the speed of sound. A sonic boom echoed across the desert. Inside, Yeager recalled, it became so smooth that “Grandma could be sitting up there sipping lemonade.” At that moment, he was the fastest human in history.

The B-29 that launched the X-1 trailed off, mission accomplished.

We had already done the preliminaries: scooting the bike with her feet, coasting a bit from a small push, and pedaling as I jogged along holding her. All fine. But we were stuck at my letting go while she kept pedaling.

“I don’t want to fall!”

I convinced her it was okay for me to let go a few seconds at a time as she pedaled. Yet when I tried to stretch the counts, she would put her feet down, her shoes skidding the bike to a stop.

She knew she needed to keep pedaling, that more speed meant more balance. But knowing and doing were different things.

Amid growing frustration, a friend of hers happened by. A recent success story on two wheels, the friend had a simple statement: If you want to do it, you can. With that, the friend rode off matter-of-factly.

It was the right message, from the right messenger, at the right time. As she watched the friend ride away, I could see my daughter reframing the problem in her mind. It was no longer about wanting to learn, like at school; it was about wanting to graduate.

In our next pass down the street, she pedaled faster. She trusted me to let go as long as she was staying up, allowing my catches to steady her as she continued pedaling. She was beginning to instinctively adjust the front wheel for balance.

Then I was hands-off for five, ten, fifteen strides. “You’re doing it! Keep going!”

She did, accelerating.

I kept running with her, a few steps back. In the retelling, I imagine myself trailing off as she self-propels. At that moment, in our little world, she is the fastest human in history.

Sunday, April 1, 2012

New Name, Look, and Features

After 5+ years and nearly 300 postings, this blog is getting a new name, look, and features.

The name, “Words & Numbers,” is what I would have called it from the beginning, had I known what this blog would be about. But I discovered its aboutness along the way.

The new look and features come with a change of blog platform, from TypePad to Google’s Blogger. Most of the features are minor improvements, such as better support of mobile devices and social sharing. However, I also took the opportunity to redo the topic labels, improve the typography, and add a Best Of section.

Finally, for people who follow via RSS: Sorry for the old items in your RSS reader. The platform change caused that. If you mark everything read, all will be back to normal going forward.

Thursday, March 29, 2012

Intelligent Cross-Sell: The CNET Years

After integrating ExactChoice into CNET.com, my main task was to create something new for CNET. That became Intelligent Cross-Sell, a product used by four of the top ten brands in the Internet Retailer 500, among others.

I was part of CNET Channel, since renamed CNET Content Solutions. Its customers are e-commerce sites that sell technology and consumer-electronics products. Its primary product is a detailed database of products. E-commerce sites use this database to display products and specs in a standardized way. For example, if you see a product page for a computer on CDW.com, much of the page’s content is actually from CNET.

Circa 2005, having attracted a large number of e-commerce customers worldwide, CNET was looking for something new to sell them. My job was to determine what it should be and then to build it with my own team.

The industry term for this role is intrapreneur. It can mean anything from “leader of a CEO’s pet-project skunk works” to “random guy building something not elsewhere classifiable on the org chart.” In my case, I was fortunate to have both a specific place in the org chart and a high degree of autonomy. I also had strong executive support.

By choice, I worked as part of a two-person team, with my ExactChoice partner Howard Burrows. We knew how to explore concepts quickly and cost-efficiently, having practiced what today would be called lean-startup techniques since founding ExactChoice in 2002.

At the outset, I talked with dozens of CNET customers about their e-commerce businesses, looking for the pain points we could reasonably address, ranking them by risk and reward. The opportunity that kept winning was a tool to automate cross-selling. Although everyone was familiar with Amazon.com’s “people who bought this also bought that,” tech and consumer-electronics sites could not use it to determine, for example, the right carrying case with a computer.

Among the challenges with “people who bought this also bought that” were:

  • If a few consumers mistakenly bought the wrong-sized case for a computer, the algorithm would start recommending the bad combo, causing a slew of returned products.
  • It was useless for new products without sales history—no people who bought this, then no people who bought that.
  • It left no room for merchandising. For example, as computers began appearing with the Bluetooth wireless standard, cross-selling Bluetooth mice made sense. But how could merchants tell the algorithm to do that when it was only looking backward at the non-Bluetooth past?

Because of these issues, many large tech and consumer-electronics sites were using humans to manually configure cross-sells. These sites had tens or hundreds of thousands of products, changing rapidly. The humans could not keep up. We would later attract two of our early customers—billion-dollar e-commerce sites—by showing them their percentages of empty cross-selling slots.

The beauty of the opportunity was that it played to CNET’s strength. The CNET product database, DataSource, contained the dimensions of most computers. It also contained the size capacities of most carrying cases. A trivial math operation could prevent a sizing mismatch. This is what the humans were doing in their heads, one product combination at a time. This is what we could do nearly instantly, across an entire product catalog.

In addition to preventing bad cross-sells, we could also enable good ones: Bluetooth mouse to Bluetooth computer? No problem. Match the mouse’s brand with the computer’s brand? Easy. CNET’s database had more than 100 million product attributes to fuel such rules, which would emulate how a person intelligently chooses cross-sells.
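A hedged sketch of what such attribute-driven rules might look like in practice. The schema and field names here are invented for illustration—the actual DataSource attributes aren’t shown in this post—but the shape of the logic (filter out incompatible combinations, then rank by merchandising preference) follows the examples above:

```python
def compatible_case(computer, case):
    """Sizing rule: a case fits if its capacity covers the
    computer's screen size (hypothetical attribute names)."""
    return case["max_screen_in"] >= computer["screen_in"]

def cross_sell_rules(computer, accessories):
    """Emulate a merchandiser choosing cross-sells from attributes:
    drop size mismatches, drop Bluetooth mice for non-Bluetooth
    computers, then rank same-brand accessories first."""
    picks = []
    for item in accessories:
        if item["type"] == "case" and not compatible_case(computer, item):
            continue  # prevents the returned-products problem
        if item["type"] == "mouse" and item.get("bluetooth") and not computer.get("bluetooth"):
            continue  # Bluetooth mouse only for a Bluetooth computer
        picks.append(item)
    # Stable sort: same-brand matches float to the front.
    picks.sort(key=lambda i: i["brand"] != computer["brand"])
    return picks
```

Unlike “people who bought this also bought that,” nothing here depends on sales history, so the rules work on day one for a brand-new product—one of the three gaps listed above.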

Of course, the system would measure itself, so we would have additional data about each product’s sales, its effectiveness as a cross-sell, even its behavioral performance in “people who did this also did that.” I liked that, because attribute-driven rules and behavioral data were together likely better than either approach separately.

Finally, the system would need to support hands-on use by merchandisers. Rules would be customizable, in a drag-and-drop way. And reports would link back to rules, so a merchandiser could see which rules caused which numbers.

That was the vision for Intelligent Cross-Sell. We announced the product in February 2006 and released it later that year, with paying customers.

By the first release, we could already see Intelligent Cross-Sell was substantially increasing customers’ cross-selling revenue. We later did case studies with Office Depot and Dell that reported a doubling of cross-sell and upsell revenue. (Upsells are another type of product recommendation that Intelligent Cross-Sell does. Whereas a cross-sell offers a carrying case with a computer, an upsell offers a better computer in place of the one you are considering. When doing this, Intelligent Cross-Sell can automatically generate “pitch text” based on an analysis of each computer’s specs, such as “Faster processor and 50% more storage.”)
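The pitch-text idea—turning a spec comparison into a sales phrase—can be sketched in a few lines. The spec fields below are hypothetical, and the real feature surely handled far more attributes, but this shows how the “Faster processor and 50% more storage” example could fall out of a simple comparison:

```python
def pitch_text(current, upsell):
    """Generate upsell pitch text by comparing two computers' specs
    (hypothetical fields: clock speed in GHz, storage in GB)."""
    phrases = []
    if upsell["ghz"] > current["ghz"]:
        phrases.append("Faster processor")
    if upsell["storage_gb"] > current["storage_gb"]:
        pct = round(100 * (upsell["storage_gb"] - current["storage_gb"])
                    / current["storage_gb"])
        phrases.append(f"{pct}% more storage")
    return " and ".join(phrases)
```

Comparing a 2.0 GHz / 500 GB machine against a 2.6 GHz / 750 GB upsell yields exactly the example phrase: “Faster processor and 50% more storage.”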

Although we hit the market as the housing-bubble-induced recession was starting, we managed to get a decent core of customers in the 2007 to 2009 timeframe. By 2010, among our customers were four of the top ten brands in the Internet Retailer 500’s list of e-commerce sites. We had also gone international, at sites in the United Kingdom, France, Germany, and Denmark. Later, we reached sites in Sweden, Norway, and the Baltics.

As we grew Intelligent Cross-Sell’s revenue, we hired a small team to help evolve and support the product. Things were good in our little world.

But 2010 was a turning point. In the previous few years, several venture-funded startups had emerged as competitors, each with vastly more resources than our small group. They had all started with “people who did this also did that” technology, applying it not just to tech and consumer electronics but to all e-commerce categories. Although Intelligent Cross-Sell was still superior for cross-selling tech and consumer-electronics products, the best startups were using their greater resources to offer a broader set of capabilities, with cross-selling and upselling being just one aspect.

We knew the game was changing when, in mid-2010, two customers who had been highly satisfied nevertheless defected to other vendors. The other vendors simply offered more stuff. It was like being a bakery in a town that starts getting supermarkets. Our bread was better, but we didn’t have a deli counter or a produce aisle.

We could have adapted by becoming even more specialized, like an artisanal bakery of cross-selling. But it would have been hard to do within CNET, which had become CBS Interactive when the media giant CBS acquired CNET in 2008. As an enterprise software-as-a-service player, we were already an outlier of a business within CNET, more so for CBS. I did not want to make us even more marginal. So I concluded that everybody—CBS/CNET, the Intelligent Cross-Sell team, our customers—would be better off if we could partner with one of the other players whose only business was doing what we did.

The right match turned out to be RichRelevance, the personalization company with, by far, the most blue-chip customer base, as well as the most complementary approach to the market. In the partnership, RichRelevance would run the Intelligent Cross-Sell technology and employ the team; CNET would license its data and provide sales collaboration. The best supermarket would now offer artisanal bread.

The deal proved to be a win for everyone. For me, having spent five years on the product, having built a team without ever losing an employee, and having worked directly with every customer, I wanted Intelligent Cross-Sell to continue on the best footing possible. It did, and still is.

Tuesday, January 24, 2012

Meet Omnivark

Along with taking some time off and doing consulting, I’ve been working on a new project:

Omnivark is a highlight reel for the written word. Each weekday we short-list the best new writing on the Web—the kind of writing that delivers such surprise and delight that you feel bad for not having time to find or read it. ;)

Omnivark creates that time for you. It fits the best stuff into an idle moment on your mobile phone or tablet or computer.

From writers famous to obscure, on topics familiar to foreign, Omnivark curates the well-said for the well-read.

I’ll be explaining more about the motivations and technology behind Omnivark soon. In the meantime, please check it out if you’re so inclined. (And if I may tilt your inclination, see last Friday’s edition, first entry, which you are statistically likely to appreciate.)

You can get Omnivark each weekday free via email, Twitter, Facebook, or RSS.