Saturday, December 19, 2009

Klosterman’s Fargo Rock City

On the charts for June 20, 1987, five of the top six albums were by Whitesnake, Mötley Crüe, Bon Jovi, Poison, and Ozzy Osbourne. If this means nothing to you, move along; nothing to see here.

For those still reading, need I even mention Twisted Sister, Quiet Riot, Ratt, Cinderella, and Warrant? Author Chuck Klosterman wants you to remember and care, even if the music was as disposable as the hairspray.

His 2001 book Fargo Rock City salutes the 1980s era of hair metal as something important. He provides a somewhat chronological, sometimes autobiographical, mostly unapologetic tour of topics in his theme’s general vicinity. If that sounds loose, it is.

Klosterman is often funny, and occasionally philosophical, in his defense of musical acts that were critically maligned in their day and have not fared better since. He gets most of his laughs acknowledging, in detail, the ridiculousness of it all (his description of Poison: “three lovely ladies who were actually three guys from Pennsylvania and a dope fiend from Brooklyn”). Yet he saves his true ridicule for those elitists who turned up their noses when teenagers like Klosterman turned up the volume on the likes of Shout at the Devil.

This is where the philosophy comes in. Klosterman thinks that:

[P]op music doesn’t matter for what it is; it matters for what it does. The greatest thing about rock ‘n’ roll is that it’s an art form where the audience is more important than the art itself. Whether or not [Mötley Crüe’s] “Home Sweet Home” was terrific is almost irrelevant; the fact that a million future adults believed it was terrific is what counts.

Klosterman never confronts the question of why 1980s hair metal counts any more than other music that was popular at a certain time. For example, were boy bands of the late 1980s and 1990s (New Kids on the Block, ‘N Sync) equally important to Mötley Crüe because they too held sway with millions of teenagers?

Although you may not be convinced by Klosterman’s larger point, Fargo Rock City is still a rich grab bag of riffs about the significance of stupid stuff. The only catch is, you need to have experienced enough of the subject matter to get the jokes. If not, you’ll probably just find Fargo Rock City sophomoric, without realizing how fitting that is.

Sunday, December 13, 2009

“The Bloomberg of Wind” (Wind Pole Ventures)

Per our occasional theme of doing data better, a company called Wind Pole Ventures wants to be the “Bloomberg of wind.” It has acquired the rights to mount wind-speed sensors on more than 1,000 existing microwave towers across the United States, creating a “wind analytics data network.”

From a CNET News story on the company:

Gathering data at 100 meters (328 feet)—about the same height of wind turbines’ towers—delivers far more accurate information than getting a reading at 10 meters, which is how data is typically gathered now, [CEO Steve] Kropper said.

“Ten states have more than 3 percent wind power in their state and because it’s intermittent, it comes and goes. So wind has the capacity to provide the grid or destabilize it,” he said. “Since there is not storage yet, all we can do is have better predictions for when it blows and when it stops.”

Wind Pole Ventures plans to sell its data and analytics to power companies, wind farm operators and developers, power traders, resource analysts, and government.

Saturday, December 5, 2009

Full-Circle Guitar

I got my first guitar when I was nine years old. It was a $25 acoustic cheapie, made with woody-looking plastic. Or maybe it was plasticky-looking wood.

The instruction book tutored by way of “How Much Is That Doggie in the Window?” and “The Streets of Laredo.” I later found that the fingerings the book demonstrated were needlessly difficult for a beginner, like a G chord that required splitting one’s ring finger and pinky across five strings. At the time, I had grave concerns about the physical impossibility of such things. But I persisted, driven by an urge to somehow make sounds like I heard on KISS Alive.

A few years earlier, a babysitter inadvertently introduced me to KISS Alive. It was a two-record set of bombast that I didn’t understand but instinctively liked. I didn’t realize the band had the makeup-wearing, fire-breathing shtick until later. I just remember being drawn to the big, distorted guitar sound.

The closest thing my parents’ record collection had to the KISS guitar sound was the reprise of “Sgt. Pepper’s Lonely Hearts Club Band,” the first few seconds of which dangled a morsel of that distorted guitar. I played it again and again, not bothering with the rest of the song.

At some later point, my parents reluctantly sanctioned the purchase of KISS Alive, on sale for $4.99. And yet later came the cheapie guitar.

Somehow I taught myself to play the cheapie well enough that my parents indulged me what I really wanted, an electric guitar. It was a no-name Les Paul copy, a budget knock-off of the legendary model used by Led Zeppelin’s Jimmy Page and The Who’s Pete Townshend, not to mention KISS’s Ace Frehley. It came with an amplifier modestly larger than a loaf of Wonder bread. Turned all the way up, helped along by a distortion pedal, the dwarf amp—its brand name was in fact Dwarf—could achieve a junior version of that big guitar sound.

I and some like-minded sixth-graders were in a band that went by many names. A parade of names was inevitable because we spent at least as much time contemplating band names and logos as we did playing music. The name that stuck longest was Zodiac. The logo projected the bottom of the Z under the rest of the letters, terminating with an arrow. It was a classic kid thing, that logo: We took forever creating it, only to end up with a minor variation of The Who’s logo circa 1965.

Our one gig was a Halloween party in 1979. As aspiring crowd-pleasers, we attempted the hit of the moment, The Knack’s “My Sharona.” It was an easy song to play except for the guitar solo, which was almost two minutes long. I could only play the first 15 seconds, so I just repeated that snippet with increasing fervor. The audience, fellow sixth-graders plus siblings, gave it a polite A for effort.

By this time I had gone beyond KISS, graduating to what today would be called classic rock: The Stones, The Beatles, The Who. Although less of the big-guitar sound was required, we’d nevertheless find occasions for heavy power chords, as with The Who’s “Baba O’Riley”: Imagine a prepubescent singer yelping about “teenage wasteland” while guitar and drums rendered the song’s signature riff in rickety blasts. Despite wreaking havoc on the details, it got an essential something right.

Thinking back, I don’t remember anyone wanting to be a rock star. There was no master plan for fame and fortune. The planning horizon was more like, “Let’s play ‘Barbara Ann’ for thirty minutes, then the last one to the swimming pool has to be Marco in Marco Polo!” There ensued much duck-walking, jumping off chairs, and other theatrics to accompany thirty minutes’ worth of the same three chords.

Through junior high and high school, such jam sessions continued in a slightly more mature manner, with an evolving group of friends. Along the way I ended up with a real electric guitar, a Fender Stratocaster, and a decent amp. I got good enough that most people would come away impressed with my chops. However, I had enough encounters with real musicians that I knew I wasn’t in their league. Those guys loved music, but it was also their job, and I didn’t want that. So music remained a hobby for me.

In college my interests evolved from band jams to electronic music and studio recording. Although I still used the guitar, the center of gravity had migrated to computers and the studio itself as instruments. For variety, I rotated through a few obscure guitar species like an electric 12-string, a fretless bass, and a guitar synthesizer. I also contributed goofball guitar licks to a series of home-recording adventures undertaken by my college housemates and me, wherein various styles of music were plundered for laughs.

But after joining the working world, and especially the start-up world, time for making music evaporated. My gear found its way to friends or was sold. These days, all I have is a single acoustic guitar, a better version of the long-gone cheapie. I’ll occasionally pull it out if I can think of something that might make my three-year-old daughter smile. Usually, that’s something closer to “How Much Is That Doggie in the Window?” than anything I later played. It’s an oddly fulfilling way to come full circle. Maybe as she gets older, there will be another loop around.

Monday, November 23, 2009

A Building to Behold: Yale’s Beinecke Library

I was visiting Yale University last weekend when I came across a building that blew me away. From the outside, it’s a formidably modern structure...

...that contrasts with the (literally) old-school architecture of a university founded in 1701.

Lest we judge a book by its cover, let’s look inside. In the middle of the interior is a six-story glass tower. It contains 180,000 rare books.

To protect the books from direct sunlight, the exterior panels are translucent white marble. During the day, subdued light filters through the veined marble.

The platform around the glass tower is an exhibit space that includes an original Gutenberg Bible from 1454.

If you are ever at Yale, visit this building. It is called the Beinecke Rare Book and Manuscript Library. In form and function, it is an impressive monument to the preservation of human knowledge.

[The images are from Wikipedia’s Beinecke Library page. Clicking an image takes you to the original, full-size version from Wikimedia Commons.]

Sunday, November 15, 2009

Blind Man’s Bluff by Sontag and Drew

During the Cold War, the United States and the Soviet Union played a continuous game of cat and mouse under the seas. Submarines prowled the depths, each seeking to be the spy rather than the spied-upon. In Blind Man’s Bluff, Sherry Sontag and Christopher Drew compile first-hand accounts of secret U.S. sub missions that could qualify as thriller fiction but actually happened.

For example, in the early 1970s the surveillance sub Halibut was operating in the Sea of Okhotsk. It was on a mission to tap a Soviet undersea cable, when...

A storm above began boiling beneath the surface. The divers were trapped outside, unable to climb back into the DSRV chambers as Halibut strained against her anchors one moment and slammed into the seafloor the next....Then there was a loud crunch. Both steel anchors snapped at once, broke so easily they could have been rubber bands.

Outside, the divers watched as Halibut began to drift upward. The men were still linked to the submarine through their air hoses. They knew that they would die if Halibut pulled them up before they could decompress. If they cut themselves loose, they would suffocate. Inside, the officer of the deck was well aware of the danger when he shouted a desperate order: “Flood it!” 

He said it a second time. Valves were rolled wide open, and Halibut began to take in tons of water, filling her ballast tanks in a matter of seconds. Belly first, she crashed into the sand. The divers scrambled into the DSRV chamber. 

The horrendous ride was over. But there was no guarantee the submarine would ever be able to break free of the muddy sand.

The book has many such stories of harrowing underwater action. The authors also cover intellectual stories, such as the problem-solving that located Scorpion, a sub that had sunk somewhere along a 3,500-mile swath of ocean.

In sum, Blind Man’s Bluff is interesting and gripping history. I bought it for a long flight and read it straight through.

Saturday, November 7, 2009

Popularity of Words for Numbers

I came across a list of the 15,000 most common words in the English language. The list is from the British National Corpus (BNC), a 100-million-word compilation of spoken and written (British) English from the late twentieth century. The top 15,000 words each had a word frequency, the count of how many times the word appeared in the BNC.

Given such a nice data set, I couldn’t resist asking it a quick and fun question: Do the words for numbers rank the same as their numerical order? That is, would one be more frequent than two, and two be more frequent than three, and so on?

For one through nine, the answer is yes. However, ten through twenty is a different story.
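The check itself takes only a few lines once the word counts are loaded. Here is a minimal sketch in Python; the counts below are made up for illustration (the real numbers would come from the BNC frequency list):

```python
# Do number words rank in the same order as the numbers themselves?
# These counts are illustrative placeholders, not actual BNC figures.
freq = {
    "one": 100, "two": 60, "three": 40, "four": 30, "five": 25,
    "six": 20, "seven": 15, "eight": 12, "nine": 10,
}

words = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine"]

counts = [freq[w] for w in words]
monotonic = all(a > b for a, b in zip(counts, counts[1:]))
print(monotonic)  # True: each word is more frequent than the next
```

The same loop extended through "twenty" is where the pattern starts to break.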

Because ten through twenty includes words with very low frequencies relative to one, let’s redo the above chart with a logarithmic scale for frequency. Now we can better see the relative differences of the lower-frequency words.

Ten, twenty, and fifteen are roundish numbers, so we shouldn’t be surprised by their breaking the pattern. We can also give twelve an exemption because of its prominence in various units (twelve months to the year, twelve inches to the foot, and so on).

Excluding those, eleven and thirteen continue the pattern established from one to nine. But then fourteen and sixteen go the wrong way, exceeding thirteen in popularity. Seventeen gets back in line, with frequency less than thirteen.

Is it only the prime numbers that can remain well-behaved teens? No, the next prime, nineteen, has a higher frequency than not just seventeen but also thirteen and eleven. I suspect that nineteen’s popularity stems from its use in dates, which might also explain eighteen’s place.

I could go on, but suffice to say, multiple factors are at play. So, in the name of stopping while this exercise can still be classified as “quick and fun,” I hereby stop.

For those in need of even more obscure numbers about words, I direct you to The Prime Lexicon, a list of words that are prime numbers when expressed in base 36.
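The Prime Lexicon's premise is easy to reproduce: read a word as a base-36 numeral (digits 0–9 plus a–z) and test the resulting number for primality. A quick sketch:

```python
def is_prime(n: int) -> bool:
    """Trial division; plenty fast for word-sized numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_prime_word(word: str) -> bool:
    """A word is 'prime' if its base-36 value is a prime number."""
    return is_prime(int(word, 36))

# "it" is 18*36 + 29 = 677, which is prime.
print(is_prime_word("it"))   # True
print(is_prime_word("one"))  # False: its base-36 value, 31946, is even
```

Python's int() accepts bases up to 36, which makes the conversion a one-liner.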

Thursday, October 29, 2009

Onomatopoeia for Books (The Twitter Book by O’Reilly and Milstein)

An onomatopoeia is a word that sounds like what it means—for example, saying “hiccup” sounds like a hiccup. In the same vein, every once in a while I find a book that looks and feels like what it says. For example, Scott McCloud’s Understanding Comics is a comic book that explains the techniques behind comic books.

The Twitter Book by Tim O’Reilly and Sarah Milstein is another example. It’s a smart, friendly, and Twitteresque introduction to Twitter.

Wider than tall, the book presents a topic on each two-page spread: pictures on the left, text on the right. This bite-sizing of concepts and advice reflects the feel of Twitter without being gimmicky.

Although the book is about Twitter, it’s addressed to you. An example: “Twitter gives you two superhero strengths everyone wants: the power to read people’s thoughts and the ability to overhear conversations as if you were a fly on the wall.” If that sounds too breezy, ask yourself how much earnestness you really want from a Twitter how-to book.

Along with making the conceptual sell for Twitter’s goodness, the authors recommend specific things to do with Twitter and, more important, how to do them well. Of course, the Web is rife with Twitter how-to material, free for the clicking. With The Twitter Book you’re paying for a succinct version of the facts plus authoritative advice. It’s like learning the rules of the road from a driving instructor who also knows the coolest places to take your car.

Not to mention, you get that underlying—but not cloying—Twitterness of the book: The look, feel, and tone creatively and satisfyingly reflect the topic, as sure as “tweet” is an onomatopoeia. ; )

Well done, @timoreilly and @SarahM.

Thursday, October 22, 2009

Review: Samantha Power’s Chasing the Flame

Samantha Power’s book Chasing the Flame is about an extraordinary man’s humanitarian service amid conflicts in Lebanon, Cambodia, Rwanda, Kosovo, East Timor, and Iraq. A Brazilian and career United Nations officer, Sergio Vieira de Mello was, in the words of a U.S. diplomat, “the personification of what the UN could be and should be but rarely is.”

Vieira de Mello lived to be in the field, mixing it up with peasants, soldiers, jungle rebels, and presidents alike. He had a knack for charming them all. He took risks to get things done. He was propelled by ideals but could be ruthlessly pragmatic. He was resilient against setbacks, adapting to whatever worked when theory and practice collided. He made mistakes, then learned from them.

Here is a sampling of Vieira de Mello’s assignments:

  • You need to safely repatriate 400,000 Cambodians into, among other places, land controlled by those responsible for The Killing Fields.
  • You need to provide humanitarian relief in the former Yugoslavia while multiple sides are at war, and your own forces can barely defend themselves, much less the population.
  • You need to create the government of East Timor—including the political system that elects the government—from the smoldering ruins of a brutal occupation.

These kinds of challenges, which no country would touch individually, are what UN humanitarian operations are for. However, UN actions are largely constrained by the funding and politics of donor countries. As Power quotes former Secretary-General Kofi Annan:

Our system for launching operations has sometimes been compared to a volunteer fire department. But that description is far too generous. Every time there is a fire, we must first find the fire engines and the funds to run them before we can start dousing the flames.

Even Annan’s correction is generous. It implies that resources are the only issue. He omitted the part about needing to get consensus from UN Security Council countries about how to fight the fire. That consensus was often lacking, so Vieira de Mello’s marching orders often came with a straitjacket that all but assured failure. Everyone would then blame the UN for being ineffective.

That said, the UN caused many of its own problems. Vieira de Mello was constantly at war with the UN bureaucracy, which had many of the foibles of “big government” without actually being a government. For example, Vieira de Mello could only use his budget for UN personnel and equipment; he could not pay a country’s civil servants or bankroll the repair of an electric grid. As Power notes, “In the economic sphere, these rules ensured that the large UN peacekeeping and political mission managed to distort local economies without being able to contribute to development.”

But for all the UN’s problems, Vieira de Mello loved the organization for its mission. He saw the UN as the main vehicle for nations to collectively do the right thing. Even if that happened infrequently and inefficiently, what exactly was the alternative when masses of people were dispossessed and dying?

Vieira de Mello was widely regarded as a future UN Secretary-General. That destiny died in 2003 when a truck bomb detonated at the UN’s Iraq headquarters, killing Vieira de Mello and 21 others—a final tragedy in a life that swam upstream against torrents of human tragedy. He was 55 years old.

With Chasing the Flame, Power delivers a compelling biography, braided with deft analyses of what worked, what didn’t, and what can be learned from Vieira de Mello’s efforts on behalf of humanity.

Sunday, October 18, 2009

Negative-Space Pumpkin

We were visiting Newport, RI, this weekend and stumbled on the 7th Annual Ballard Park Pumpkin Tour. Imagine a winding, wooded trail with jack-o’-lanterns perched in trees, on stumps, along the ground, in clusters, everywhere. In total more than a thousand pumpkins loomed amid the gawking parade of people.

For creative design, my favorite was this negative-space pumpkin:

Below are a few more examples of the variety and quality on display. What a great job by the event’s organizers and the Newport community!

More pictures are on this page, where the main image changes every few seconds.

Wednesday, October 14, 2009

Our Greatest Animal Menace?

A new site, Book of Odds, has an entertaining piece about the relative likelihood of being killed by a shark versus being killed by a vending machine:

The odds a person will die from a vending machine accident in a year are 1 in 112,000,000, while the odds that a person will die from a shark attack in a year are 1 in 251,800,000. One can say with confidence that while vending machines crush an average of 2 to 3 unfortunate Americans every year, the number of recorded US shark fatalities is typically nil.

The author goes on to remind the reader that, despite these numbers, sharks are physically more dangerous than vending machines. So if people abused sharks as often as they abused vending machines, those numbers would be very different.

For comparison’s sake, I consulted the Centers for Disease Control and Prevention’s Compressed Mortality File and found that in 2006:

  • 8 Americans died from “contact with venomous snakes and lizards.”
  • 32 Americans died from “bitten or struck by dog.”
  • 72 Americans died from “bitten or struck by other mammal.”

Thus, by the aggregate mortality numbers, our fellow mammals, especially “man’s best friend,” are our greatest menace (excluding ourselves) in the animal kingdom.

Saturday, October 10, 2009

The Origin of Wealth by Eric Beinhocker

The best parts of Eric Beinhocker’s The Origin of Wealth are (1) an alternate history of economics that argues the foundation is cracking, and (2) a wide-ranging tour of new ideas that Beinhocker calls Complexity Economics. Here is his summary of the old versus the new.

If that table rings your bell, then The Origin of Wealth will be a carillon choir’s worth of bell-ringing for you. At many points, I found myself needing to put the book down to digest the richness of the ideas. (That’s a good thing.)

Less successful is Beinhocker’s attempt to unify the book around a theory of wealth creation—in brief: wealth comes from innovation, which in turn comes from evolutionary processes operating in an economy like Darwinian evolution operates in a biological ecosystem. While there’s a lot to like about this theory, it’s a subset of the Complexity Economics story. As such, it doesn’t really unify the book so much as it blurs into other, non-evolutionary aspects of Complexity Economics. Also, Beinhocker is so expansive in exploring Complexity Economics’ implications that he sometimes comes across as throwing stuff at the wall to see what sticks, as opposed to telling us what is sticking.

As a result, the totality of this 450-page book is less than the sum of its parts. However, so many of the parts are excellent that The Origin of Wealth is well worth recommending to those with an inclination toward the subject matter.

Here’s the link to the book at Amazon.

Sunday, October 4, 2009

A Quick Spin with Google App Engine

Google App Engine (GAE) lets you build a Web app on your local machine, then deploy it to Google’s infrastructure. I played with GAE this weekend and was impressed.

The set-up and “hello world” experience was fast and painless. The tutorial and example code were excellent. Less than 30 minutes after downloading the software development kit, I felt like I understood the basics and could start doing my own stuff. It’s a great feeling to learn that quickly. Credit goes to the GAE developers and documenters, who went the extra mile to make things easy for new users like me. Thank you.

For the sake of doing something simple but practical, I wrote an app that automatically keeps my Twitter tweets backed up in the Google cloud. It uses the Twitter API to get my tweets, then stores them in GAE’s datastore so they can be queried and displayed by various fields. As examples of the possible outputs, I created a page that displays just the tweets’ content, and another that dumps every field of every tweet as tab-delimited text.
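The backup logic itself is simple: ask Twitter for anything newer than the latest stored tweet, then store what comes back. Here is a schematic sketch in Python, where fetch_tweets is a stand-in for the actual Twitter API call and a plain dict stands in for GAE's datastore (the function name and tweet fields are illustrative, not the real API):

```python
# Schematic tweet-backup loop. fetch_tweets stubs the Twitter API;
# a dict keyed by tweet id stands in for GAE's datastore.
datastore = {}  # tweet id -> tweet record

def fetch_tweets(since_id):
    """Stub: pretend the API returned tweets newer than since_id."""
    sample = [
        {"id": 101, "text": "hello world", "created_at": "2009-10-04"},
        {"id": 102, "text": "backed up in the cloud", "created_at": "2009-10-04"},
    ]
    return [t for t in sample if t["id"] > since_id]

def backup():
    """Store any tweets newer than the newest one already saved."""
    since_id = max(datastore, default=0)
    new_tweets = fetch_tweets(since_id)
    for t in new_tweets:
        datastore[t["id"]] = t
    return len(new_tweets)

print(backup())  # 2 on the first run
print(backup())  # 0 on the second run: nothing new
```

Keeping the highest stored id as the high-water mark is what makes the periodic 30-minute check cheap: each run asks only for the delta.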

I got all the above done and deployed in three hours. The app is now running on Google’s infrastructure, checking for new tweets every 30 minutes and storing what it finds.

So, from my brief experience, a small Web app can be developed and deployed rapidly on GAE. Compared to other cloud platforms, which give you more flexibility at the cost of more configuration and administration, GAE seems particularly well suited to quick, small solutions. I suspect that the bigger your ambitions become, the more GAE’s simplifying aspects will become obstacles. However, depending on what you’re trying to do, there may be a lot of headroom until bigger becomes a problem. (In this context, bigger means more functionality, code, and dependencies, not more traffic. For the latter, if you do things the GAE way, your app will benefit from Google’s infrastructure and should handle as much traffic as you can attract.)

For the technically inclined, here are some additional notes:

I used GAE’s sandboxed version of Python. GAE has an equivalent for Java and other languages that run on the Java Virtual Machine.

Instead of having access to a file system or relational database, you use GAE’s datastore for the equivalent of local storage. At first, it feels like an object-relational mapping, where you define a Python class for each kind of entity you want to store. For example, you might define a class Person, with instance variables name, birthdate, and so on. If you create a Web form that allows someone to submit his or her name, birthdate, and other information, your app would take the input and instantiate a Person object, p. Storing it would be as simple as p.put().

However, GAE’s datastore is not relational, so if you go beyond retrieving all objects of type Person, you’ll need to learn some new ways of doing SQLish things. If I had gone deeper here, I’m sure I would have encountered a steeper part of the GAE learning curve. Because this aspect of GAE is the furthest from what most programmers already know, it’s an area where the documentation and examples could be more extensive.

About the development process: You develop on your local machine, using the SDK’s app server and a simulated, local version of the datastore. When you make code changes, you just hit the URL you changed, and the new version will be called. When something breaks, you get prolific debugging info back.

Once you’ve deployed an app, the Web-based management dashboard is surprisingly good, especially the logging UI.

I only saw one inconsistency between the development version of my app running locally on my computer versus the deployed version on Google’s servers: The deployed version’s Twitter API requests were often denied by the Twitter server. This was not caused by the GAE technology. Rather, it was due to other GAE apps on the same IP(s) as my app, pounding Twitter hard enough to cause Twitter’s servers to rate-limit said IP(s). It was guilt by association, cloud-computing-style.

In theory, I could have authenticated my requests to Twitter, thus avoiding the IP limit. In practice, authenticating my requests would have required either including my Twitter password with requests (distastefully insecure) or implementing OAuth (distastefully complex for this little project). So, in the name of “good enough for now,” I decided to let some requests be denied. I found that if the script checked Twitter every 30 minutes, it succeeded often enough to stay reasonably current with any changes throughout the day.

Friday, October 2, 2009

The Arbitrary Precision of a Close Finish

Michael Phelps won his seventh gold medal at the 2008 Olympic Games by one-hundredth of a second—a fraction of an eye-blink, measurable only with precision instruments. In terms of distance, Phelps won by two millimeters.

Writing in Vanity Fair, Steve King uses this context when quoting an unnamed expert in the timing of sporting events:

“We take it for granted that the swimmers are all swimming the same length in the race, but they’re not. The very best construction specs will say, ‘This pool is 50 metres plus or minus one quarter of an inch’… Our ability to build things isn’t nearly as good as our ability to time them.” Consider the implications when Michael Phelps’s seventh gold medal in 2008 was won by two millimetres over a 100-metre race.

[from “End Game” in Vanity Fair, via @pkedrosky]

Tuesday, September 29, 2009

Does the Future of TV Have Two Screens?

In Like Apple, TV Explores Must-Have Applications, The New York Times tells us:

DirecTV and the FiOS service from Verizon Communications have recently announced app stores modeled directly on Apple’s App Store. Just a few applications have shown up so far, but already these few — Bible verses, Facebook updates and fantasy sports team updates — suggest that people may not be content to sit back while watching TV but rather want to lean forward and interact and customize their TVs.

While there may be an audience segment that values Facebook on TV, I suspect that most people want something similar but different: They want to use Facebook (or Bible verses or fantasy sports updates) in the same room as the TV. If so, there’s a better way than putting apps on the TV. Put them on the remote control instead.

No, not today’s remote control with all the buttons. That device will soon be the equivalent of a pre-iPhone cell phone. Your future remote control will have a high-resolution touchscreen rather than buttons. It will have WiFi. You will get the TV program guide on the remote, not on the TV. You will make your choices by touching the remote’s screen, and the TV will obey. No more reading text across the room. No more fiddling with arrow keys to plod around the distant screen.

Along with controlling the TV, Remote Control 2.0 will specialize in text-oriented apps—like Facebook, Bible verses, fantasy sports updates, The New York Times, and so on. That way, the TV can keep doing its thing, displaying big and fast-moving images from across the room, since that’s what it is good at. This combination of far screen and near screen will make a nice division of labor. Media multitaskers rejoice!

Remote Control 2.0 will also enable a new type of app that coordinates with the TV’s content. For example, a baseball game is on the TV screen. The remote has extra statistics, alternate angles, Twitter-style fan commentary, e-commerce if you must have that throwback jersey the players are wearing, and new forms of near-screen/far-screen ads. Different people in the room may have their own remotes, displaying distinct near-screen experiences for the same far-screen program.

Of course, it will take time for apps to realize the possibilities of coordinated, two-screen TV. Normally, this might raise the specter of a chicken-and-egg problem: no apps, no second-screen remotes sold; no second-screen remotes sold, no reason to build apps. But the beauty of the second-screen remote is that it can evolve out of devices that are successful for reasons beyond being remote controls, such as the iPhone, iPod Touch, and their future variants. Today’s iPhone or iPod Touch hardware is already close to enabling Remote 2.0 functionality; if you have Apple TV, the functionality is already there in a limited way. The bigger challenge is enabling everything else necessary for media and apps to coordinate across two screens, but the world has come a long way since Intercast.

So, if you hear the assertion that people want to lean forward and interact more with their TVs, it is worth asking why. If the answer is, “To use apps like those in the App Store,” consider instead a future where people interact less with the TV and more with the remote. You will know that future is happening when people wonder, “When is a remote not a remote?”

[Update, 9/30/2009: I was not aware of it at the time, but the day I posted this, Boy Genius Report passed along a rumor, complete with picture, that Apple has prototyped a touchscreen remote control. If the picture is to be believed, it’s a touch version of the traditional remote-control form factor (long and thin). That’s a step in the right direction. See also MG Siegler’s Touching: All Rumors Point To The End Of Keys/Buttons on TechCrunch.]

Thursday, September 24, 2009

Woe is GDP

The metric Gross Domestic Product has been under fire lately, more so than usual.

I’m not enough of an expert to evaluate the technical aspects of the debate, but this commentary caught my eye:

The basic problem is that gross domestic product measures activity, not benefit. If you kept your checkbook the way G.D.P. measures the national accounts, you’d record all the money deposited into your account, make entries for every check you write, and then add all the numbers together. The resulting bottom line might tell you something useful about the total cash flow of your household, but it’s not going to tell you whether you’re better off this month than last or, indeed, whether you’re solvent or going broke.

Because we use such a flawed measure of economic well-being, it’s foolish to pursue policies whose primary purpose is to raise it. Doing so is an instance of the fallacy of misplaced concreteness — mistaking the map for the terrain, or treating an instrument reading as though it were the reality rather than a representation. When you’re feeling a little chilly in your living room, you don’t hold a match to a thermometer and then claim that the room has gotten warmer. But that’s what we do when we seek to improve economic well-being by prodding G.D.P....

Given the fundamental problems with G.D.P. as a leading economic indicator, and our habit of taking it as a measurement of economic welfare, we should drop it altogether. We could keep the actual number, but rename it to make clearer what it represents; let’s call it gross domestic transactions. Few people would mistake a measurement of gross transactions for a measurement of general welfare. And the renaming would create room for acceptance of a new measurement, one that more accurately signals changes in the level of economic well-being we enjoy.

[From Eric Zencey, G.D.P. R.I.P., New York Times op-ed, 8/9/2009]

The author of the article does not propose a new measurement, although the latest commission on the subject proposes a path to creating supplementary metrics.

Saturday, September 19, 2009

21st Century Family Photo

My parents live 2,600 miles away, but they see their granddaughter regularly via Skype video call. For no particular reason, I took a screenshot of a recent call. My daughter was holding up a doll to the camera. My dad, on the other side of the country, was pretending to grab it.

Looking at the image afterward, I thought it captured something about our time. It was the visual version of “reach out and touch someone,” and the connection itself was part of the picture.

Tuesday, September 15, 2009

On Twitter

I am on Twitter as stkrause.

There you will find pointers to new blog postings plus smaller bits (interesting quotes, links, and the like) that won’t otherwise make it into this blog’s main content.

As a sampler, here are last week’s tweets:

  • “Don’t just write to be understood; write so that you cannot be misunderstood.” — R.L. Stevenson, quoted in
  • Does a pier really need this sign? (pic taken in Sorrento, Italy)
  • “[He] was brainy in a way that didn’t quite add up to smart.” — from Stephen Foley’s postmortem of Lehman Bros.
  • 3-sentence case study on my Intelligent Cross-Sell group’s impact at Dell UK, featuring the phrase “more than doubled”
  • Fantastic book about a true legend: “The Great Siege: Malta 1565” by Ernle Bradford. My review is at
  • From the warmongering politico in “In the Loop”: “We don’t need any more facts. In the land of truth, the man with one fact is king.”
  • Saw film “In the Loop,” a political satire thick with droll/foul repartee. Think Karl Rove + Groundskeeper Willie.

Previously I had not done the Twitter thing because I felt it (and, for that matter, Facebook) represented a preliminary phase of social media more akin to AOL/CompuServe/Prodigy than the open, standards-based Web. I was fine waiting out that phase.

However, Twitter has achieved something interesting. For some people, it has become a replacement for feed reading. Where once they used Bloglines or Google Reader to keep up with their favorite blogs, they are now following, and sometimes interacting with, their favorites via Twitter.

I suppose it’s an honor that such people have cared enough to complain that I wasn’t on Twitter, and it doesn’t look like an open alternative is poised to sweep the world soon. So, I’ve decided to go with the tweetstreaming flow. For those on Twitter, I encourage you to follow me and to suggest any favorites you like to follow.

And for those who don’t want to get their own Twitter accounts but still want to see what I’m saying there, just bookmark this page or subscribe to its RSS feed. You can also find my latest five Twitter postings on the right sidebar of this blog.

Saturday, September 12, 2009

The Good Old Days, Inflation-Adjusted

In my list of factoids about the year 1919, I had some misgivings about including, “Congress reduced the price of a first-class postage stamp from 3 cents to 2 cents.”

It was notable because of the rare price reduction, which has not happened since. However, in our time of 44-cent stamps, it might seem more notable that stamps once cost a few cents. I can almost hear an old record player in the distance, warbling about the good old days.

Cue the misgivings.

If you want to compare prices across a long period of time, you need to adjust for inflation. Considering the general rise in prices between 1919 and now, maybe today’s stamps actually cost less than those of 1919.

It turns out, they don’t. A 1919 stamp would cost 24.7 of today’s cents. That is close to the lowest inflation-adjusted price for a first-class stamp in the past 150 years (21.3 cents in 1920).
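The adjustment is just a ratio of price indexes. Here is a minimal sketch in Python; the CPI values are approximate annual averages assumed for illustration (the post’s figures come from Johnson’s data, so the result differs slightly):

```python
# Inflation adjustment: multiply the old price by the ratio of the price
# index now to the price index then. The CPI-U annual averages below are
# approximate assumptions (1982-84 = 100), not Johnson's exact data.
CPI = {1919: 17.3, 2009: 214.5}

def in_2009_cents(price_cents, year):
    """Convert a historical price in cents to 2009 cents."""
    return price_cents * CPI[2009] / CPI[year]

print(round(in_2009_cents(2, 1919), 1))  # 1919's 2-cent stamp: 24.8
```

With these index values, the 2-cent stamp of 1919 comes out near 24.8 of today’s cents, in line with the 24.7 above; the small gap reflects which price index you use.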

But before we resume our tune about the good old days, let’s go further back, to 1878. The inflation-adjusted price of a first-class stamp was 49.2 cents, the high across the past 150 years. So, for stamp prices, the old days had some good and some bad.

Wm. Robert Johnson has a chart showing the price of U.S. first-class stamps from 1866 to 2009, both in prices of the time and inflation-adjusted prices. He also provides the underlying data, which I used in this post.

Sunday, September 6, 2009

Bradford’s The Great Siege: Malta 1565

On a map of the Mediterranean, Malta is a dot of an island in the Strait of Sicily, between Europe and Africa. This position, plus Malta’s natural harbors, made it a naval prize for many conquerors over thousands of years. The Wikipedia article on the subject mentions, in chronological order, the Phoenicians, Romans, Fatimids, Sicilians, Knights of St. John, French, and British. Malta achieved its independence from Great Britain in 1964.

I visited Malta recently, which led to reading The Great Siege: Malta 1565 by Ernle Bradford. My father-in-law suggested it, saying it told an incredible story. He was right. The book deserves its five-star rating on Amazon across 23 reviews. At a few hundred paperback pages, it is a concentrated dose of military conflict in extremis.

In 1565 Sultan Suleiman the Magnificent, ruler of the Ottoman Empire at its peak, sent a force of 200 ships and at least 30,000 men against 9,000 men at Malta. The goal was to take Malta and destroy the island’s rulers, the Knights of St. John, a Christian religious order that was the sworn enemy of Suleiman’s Islam. The Knights themselves only totaled 600 on the island and, for purposes of this battle, had no naval counterforce. Thus, their only option was to dig into fortresses and repel the Turkish hordes as they came.

The Turks expected Malta to fall in less than a week. But the Knights’ leader, Jean Parisot de la Valette, correctly anticipated most of the Turkish moves, including the bad ones, and exploited them. Most critically, the Turks went straight for the kill, failing to first sever the Knights’ communication and reinforcement lines, both within the island and to the outside world. This mistake allowed the Knights to hold Fort St. Elmo, a weaker fort at the first line of defense, for a month through nighttime reinforcement. In the last days of Fort St. Elmo, the nightly reinforcements knew the goal was not to win but rather to die in the process of prolonging the enemy’s advance. Hundreds of volunteers went willingly.

Here we have a key feature of the conflict. On both sides were holy warriors whose highest purpose was to die in service of the faith. The Turks had the Janissaries, which were something like today’s Special Forces except they were conscripted and trained for this elite role from the age of seven, when they were taken from Christian families living in the Ottoman Empire. Janissaries were subject to the harshest training and discipline, denied marriage or any familial connections, and were singularly forged for war. On the other side, the Knights were elite fighters drawn from the aristocracy of many nations, with hundreds of years of warfare lessons and lore. The Knights had the added fervor of those fighting for their order’s very existence. With these ingredients in the mix, the chance of a limited, gentlemen’s war was nil.

For example, after Fort St. Elmo was conquered, the Turk leaders floated the mutilated bodies of several Knights across the harbor as a calling card. In response, la Valette had Turk prisoners decapitated, then fired their heads from cannons back at the Turks. Bradford neither spares such details nor glorifies them, yet he uses them to substantial effect in illustrating the conflict’s brutality.

The book was published in 1961, and Bradford’s battle descriptions have an appropriately old-school, epic quality:

For six hours the Turks attacked, hurling themselves regardless of losses against the thin line of defenders. For six hours the battle swayed back and forth, trembling sometimes in the balance, but always—as the smoke and dust clouds cleared away—revealing the besieged still active with arquebus, cold steel, or artificial fire.

At several key junctures, the Knights could have lost. But through a combination of luck, crafty deceit, and superhuman effort, they withstood months of continuous bombardment, plus regular all-out assaults aimed at delivering the final blow.

The most dramatic turning point was when the Turks burrowed underground and mined one of the last walls protecting the Knights. The explosion breached the wall and surprised the Knights. Seeing the chaos that ensued as the Turks charged the breach, the seventy-year-old la Valette grabbed a pike and personally led the counterstrike, rallying his men to drive the Turks out.

While one might question whether such heroics were exaggerated over the years, the siege was documented in detail at the time, as it happened. Bradford draws from those primary sources. He adds insightful analysis about the strategies pursued, as well as missed, by the various players.

After nearly four months under siege, the Knights prevailed. The Turks had been taking losses on the wrong side of a 4 to 1 ratio. Demoralized, depleted, and increasingly infested with disease, the Turks gave up when they saw Spanish reinforcements for the Knights arrive.

The vastly outnumbered Knights—along with allied soldiers and, near the end, seemingly every man, woman, and child of Malta at the barricades—had beaten back one of the most powerful military machines of the time. Although long ago disbanded as a military force, the Knights of St. John are now better known by history as the Knights of Malta.

If it were fiction, the story of the 1565 siege would be a gripping enough tale. As fact, it is a true legend, well told by Bradford’s The Great Siege: Malta 1565.

[Update, 9/7/2009: I didn’t notice that Amazon, where I usually point book links, does not stock the book. The third-party sellers on Amazon start at $57 used, although there is another paperback edition at Amazon starting at $29 used; there is no Kindle edition. Also, Alibris has some used listings starting around $20. Of course, that’s all as of 9/7/2009. If you’ve wandered to this page at a much later date, the prices will be different.]

Saturday, August 29, 2009

1919 Penny

I noticed the coin in a handful of change. It was a penny, a wheatback, its features fading with age. When tilted just so, the coin revealed its date: 1919.

In 1919...

I’ll be sure to spend the coin so someone else might happen upon this connection to a faraway time.

Monday, August 24, 2009

Proactive Safety from Flight Data

In the tragic event of a passenger airplane crash, there is always the search for the flight data recorder, also known as the black box. This device continuously records dozens, sometimes hundreds, of data points about the state of the plane. Analyzing this data is usually key to understanding what went wrong in a crash.

The typical black box retains data for up to a day, recording over the older data. However, some airlines download black-box data between flights, thus maintaining a complete data history for the plane. Doing so opens the possibility of aggregating a huge number of flights’ data to detect problems before they cause a crash.

The industry term for it is Flight Operations Quality Assurance (FOQA). I discovered the topic because I met a pilot from one of the U.S. airlines that has a FOQA program. His job is to train other pilots, and one of his tasks is to bring findings from the FOQA data into the field.

For example, a notoriously difficult airport had an unusual number and variety of problems with landings. Because it was a difficult airport, extra problems were expected. However, analyzed over time, the pattern of problems suggested how the landing procedure could be changed to increase safety—which it was.

More generally, trainers can watch for trends in which crews are getting lax about certain procedures. This allows the trainers to identify where additional training is needed and to assess the effectiveness of that training.

As a frequent airline passenger, I was pleasantly surprised to hear about this proactive use of flight data in the name of safety.

Sunday, August 16, 2009

Weird, Wonderful Comerç 24

The ice cream duo was cod and artichoke, but what could we expect? We were there for the unexpected.

The restaurant was Comerç 24 in Barcelona, founded by a chef formerly of El Bulli, the global shrine of innovative cuisine—think meat rendered as foam, cheese made from almonds, and “Kellogg’s paella” (Rice Krispies, shrimp heads, and vanilla-flavored mashed potatoes). You may love those dishes or hate them, but you won’t forget them.

At Comerç 24, we had chosen the tasting menu. It did not deign to say what would be coming, all the better to surprise not just with the taste but with the concept of each dish: Sirloin infused with berries and roses? A winner. Consommé with gelatinized balls of egg, truffle, and parmesan? Intriguing. A tall shot-glass smoothie of mandarin orange, passion fruit, and mint, the flavors stacked like layers in a cake? Wow.

Of course, various foams made appearances. My favorite was the mashed potato foam that accompanied the sea bass. And then there were the gold-dusted macadamia nuts, a simple and satisfying interlude among complex dishes.

In total, we were served seventeen small-plate dishes, which sounds more outrageous than it was. Some were bite-size; others were more substantial yet still relatively petite. It certainly did justice, and then some, to the idea of a tasting menu.

We rated most dishes somewhere between very good and great. All had interesting twists of flavor and texture. A few went too far, like the cod and artichoke ice cream. However, such judgments are relative. The people at the next table claimed to like that dish.

Overall, the meal—or, should I say, culinary experience—was a unique mix of weird and wonderful. So for those in search of something seriously different, and who happen to be in Barcelona, you know where to go.

Monday, August 10, 2009

Penetration Confusion

From “Final Frontier for Wireless Hard to Break Through” in The New York Times: “In Botswana, cellphone penetration exceeds 80 percent, and in South Africa, it has topped 100 percent.”

Why does that sound wrong? We occasionally see percentages higher than 100, but those usually refer to growth rates (“iPhone sales up 150%”). In contrast, penetration implies a part of a whole. A fully penetrated market would be 100 percent penetrated.

So how does South Africa have cellphone penetration above 100 percent? Apparently, the answer comes from dividing the number of wireless subscriptions by the population (49 million divided by 47.9 million as of 2008). The count of subscriptions exceeds the count of people because some people hold multiple subscriptions, such as one for a BlackBerry and one for an iPhone, and thus get counted twice or more. A little Web searching indicates that this calculation is common in the wireless industry, and several countries have penetration above 100 percent.

The wireless industry probably started using penetration as a metric back when wireless phones were the size of bricks and the few people who had one indeed had exactly one. At that time, I doubt wireless executives dared to dream that people would one day carry multiple devices, and thus the traditional concept of penetration applied: Just divide the number of people with phones by the population, and you’ve got penetration that maxes out at 100 percent.

Many years later, the wireless industry is still using penetration—that part-of-the-whole metric—but no one knows what the whole should be. If we just use the population, we’ll get situations like South Africa where the part is bigger than the whole due to some people having multiple phones. However, at this point, changing the definition of the whole would be arbitrary: 100 percent penetration is when everyone has two wireless devices? Three?

Maybe the better answer is for the wireless industry to replace penetration with “wireless subscriptions per capita.” Botswana would get 0.8, and South Africa would have 1.02. Numerically, it expresses the same thing as the penetration metric, but it does not imply the part-of-a-whole relationship.

And since we’re on the subject, a few additional metrics would be helpful. For example, with penetration (or wireless subscriptions per capita), we don’t know whether Botswana’s 80 percent penetration is due to 20 percent of the population having four phones each or whether 80 percent of the population has one phone each. Something like “percentage of people with at least one wireless subscription” would help. From that and the already-known total number of subscriptions, we could calculate the average number of subscriptions per subscriber. Finally, we could multiply that number by the population without a subscription and, coming full circle, estimate the remainder of the market to be penetrated at that point in time!
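The chain of metrics above can be sketched in a few lines of Python. Only the South Africa subscription and population figures come from the post; the number of people with at least one subscription is a made-up placeholder, since that is exactly the statistic we said is missing:

```python
# South Africa, 2008, per the article: subscriptions vs. population.
population = 47.9e6
subscriptions = 49.0e6

subs_per_capita = subscriptions / population
print(round(subs_per_capita, 2))  # 1.02 -- reported as ">100% penetration"

# "People with at least one subscription" is not published; the value
# below is hypothetical, just to show how the other metrics would follow.
subscribers = 30.0e6
avg_subs_per_subscriber = subscriptions / subscribers
print(round(avg_subs_per_subscriber, 2))  # 1.63 with this placeholder

# Estimated additional subscriptions if non-subscribers came on board
# at the average rate -- the "remainder of the market" to be penetrated.
remaining_market = (population - subscribers) * avg_subs_per_subscriber
```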

Saturday, July 25, 2009

Ondák’s Measuring the Universe

New York’s Museum of Modern Art has a piece called Measuring the Universe by Roman Ondák. It is a hands-on experience in data visualization. Per the MOMA’s description:

Viewers play a vital role in the creation of Measuring the Universe (2007), by Slovakian artist Roman Ondák (b. 1966). Over the course of the exhibition, attendants mark Museum visitors’ heights, first names, and date of the measurement on the gallery walls. Beginning as an empty white space, over time the gallery gradually accumulates the traces of thousands of people.

Below is a photo from Flickr user profzucker that captures the environment well.

There is a dense band of names between and around the average heights of men and women. Above and below, the presence of names gets progressively sparser. At almost 6’2”, I was at the top edge of the dense part.

The piece is at the New York MOMA through September 14, 2009.

Saturday, July 11, 2009

Too Many Italian Restaurants?

I like Italian food, but most cities seem to have an oversupply of Italian restaurants compared to other types of ethnic food (as defined relative to the U.S. food market).

To test this perception, I analyzed OpenTable’s categorized listings of restaurants for Atlanta, Los Angeles, New York, and San Francisco. Italian was the top ethnic category by far, comprising nearly 18% of listed restaurants. A distant second was French at 6.6%. Next was Japanese at 3%. Mexican, Indian, and Thai were each farther down the list, below 2%.

Which raises the question: Does the populace really want three or six or ten Italian restaurants for every French, Japanese, or Indian restaurant, respectively?

Some caveats about the data:

  • I ignored the American categories due to their home-team advantage in the United States.
  • The categories are not cleanly separated. For example, Japanese and Sushi are distinct categories, but apparently a restaurant can only be in one category. So we might safely say that the 3% figure for Japanese is actually 4.4% when we add the Sushi restaurants. Similarly, French could pick up an incremental 1.1% if combined with Contemporary French.
  • OpenTable is a service for restaurant reservations, so it lists higher-end restaurants. That explains the low numbers for a category like Chinese, which has a lot of casual and take-out restaurants.
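Combining the overlapping categories, as in the Japanese-plus-Sushi example above, is simple addition over the listed shares. A sketch; note the Sushi share (about 1.4%) is inferred from the 4.4% combined figure rather than read directly off OpenTable:

```python
# Aggregate category shares (percent) quoted in the post; "Sushi" is
# inferred from the 4.4% combined Japanese figure, an assumption.
shares = {
    "French": 6.6,
    "Contemporary French": 1.1,
    "Japanese": 3.0,
    "Sushi": 1.4,
}

def combined(*categories):
    """Total share for a group of related categories, in percent."""
    return round(sum(shares[c] for c in categories), 1)

print(combined("Japanese", "Sushi"))              # 4.4
print(combined("French", "Contemporary French"))  # 7.7
```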

These and other factors undoubtedly messed with the numbers. But unless the data was totally whacked, the magnitude of the differences between Italian and the other categories seems large enough to point to something real.

For those who like details, dig in...

Category                   Atlanta  Los Angeles  New York  San Francisco  Grand Total
Contemporary American        11.1%         9.4%     10.3%           6.4%         9.1%
Mexican / Southwestern        0.7%         2.2%      1.8%           1.8%         1.8%
Fusion / Eclectic             1.8%         2.0%      1.4%           1.2%         1.5%
Contemporary French           1.1%         0.7%      1.2%           1.3%         1.1%
Tapas / Small Plates          2.2%         0.4%      0.9%           1.4%         1.0%
Global, International         1.4%         0.4%      0.5%           0.4%         0.5%
Latin American                0.7%         0.5%      0.4%           0.1%         0.4%
Comfort Food                  1.1%         0.4%      0.4%           0.1%         0.4%
Latin / Spanish               0.0%         0.5%      0.4%           0.2%         0.3%
Southeast Asian               0.4%         0.0%      0.4%           0.5%         0.3%
Gastro Pub                    0.4%         0.4%      0.1%           0.5%         0.3%
Brazilian Steakhouse          0.7%         0.3%      0.2%           0.4%         0.3%
Middle Eastern                0.4%         0.1%      0.2%           0.1%         0.2%
Creole / Cajun / Southern     0.7%         0.0%      0.1%           0.1%         0.1%
South American                0.0%         0.1%      0.1%           0.1%         0.1%
Modern European               0.0%         0.0%      0.1%           0.1%         0.1%
Indonesian / Malaysian        0.0%         0.0%      0.2%           0.0%         0.1%
Contemporary Indian           0.0%         0.0%      0.1%           0.0%         0.0%
Bottle Service                0.0%         0.0%      0.1%           0.0%         0.0%
South Indian                  0.0%         0.0%      0.0%           0.1%         0.0%
Prime Rib                     0.0%         0.1%      0.0%           0.0%         0.0%
Dim Sum                       0.0%         0.0%      0.1%           0.0%         0.0%

Source: OpenTable listings by city, 6/20/2009

Tuesday, June 30, 2009

Data-Mining 1.6 Million Putts for Irrationality

A recent study by Professors Devin Pope and Maurice Schweitzer of the University of Pennsylvania concluded that professional golfers are more likely to make the same putt if it is for par than for birdie. (Par is the expected score for a hole; anything worse is bad. Birdie is one stroke better than par, which is good.)

Duh. Par putts are usually shorter than birdie putts because the player has used an extra shot to get the ball closer to the hole.

The study only compared par and birdie putts of the same distance. It used a database of 1.6 million putts, accurate to the inch, from PGA Tour tournaments between 2004 and 2008.

What about position on the green? Greens have lots of different slopes, so the same-distance putt can be easier or harder depending on where it is. If you took an extra shot to reach your position on the green, as the par-putter did compared to the birdie-putter, you should have been able to pick a more favorable spot.

The authors accounted for that. Position on the green indeed led to variance of putting performance overall but did not have a significant effect on birdie versus par putting performance.

Well, par putts are often second putts, after a player has missed a birdie putt. That means the player has already seen the green’s speed and angle from the first putt. Even if the distance is the same between a birdie and par putt, this extra learning will help sink more par putts.

It does, but it only accounts for 20% to 30% of the difference. The authors controlled for that effect, as well as a player’s learning from watching other players’ putts on the same green.

Hmmmm, sounds like the authors were thorough.

We’ve only covered some of the effects the authors tested and controlled for.

But how widely does the finding hold?

The authors tracked 188 professional golfers. All of them were better at par putts than at equivalent birdie putts.

How much better?

Other than for the shortest putts, which were rarely missed, the difference varied between two and four percentage points, depending on distance. The best players tended to have less difference, although it was still statistically significant.

And why does this matter?

First, it’s a great example of a rigorous data analysis, the kind that can handle challenges like those above with aplomb.

Second, the conclusions are not just about golf. They have wider applicability to the debate about whether people are predictably irrational. In the golfers’ case, they treated the exact same putt differently, even though it was worth one stroke either way. The authors hypothesized that the golfers were exhibiting a flavor of irrationality known as loss aversion: The players perceived a missed par putt as a loss because par is expected, whereas a missed birdie putt was more like an unrealized gain. Standard economics suggests that people do not act differently based on such perceptions; the study suggests they do, even if they are highly skilled professionals.

Where can I read more?

The academic paper is available, but it’s not exactly a beach read. For a more approachable summary, see this New York Times article.

Sunday, June 14, 2009

New York City’s High Line

In 1980, the trains stopped running along the High Line, a mile-and-a-half of elevated railway in New York City’s Meatpacking District. By 2000, the abandoned rail line’s topside had become a scruffy greenbelt, unseen by those on the city streets below. Photographer Joel Sternfeld documented the High Line then with photos like this.

Around that time, some citizens had an idea: Let’s save the structure from demolition and make it a park. Such things are easier said than done, but they did it. The first phase of High Line park opened to the public on June 9, 2009.

I happened to be in New York this weekend, so I walked the High Line. Although it is now a public space, the High Line still evokes its former self. At many points, stretches of track remain, plants pushing up between the railroad ties.

Below is a photo from Ed Yourdon that gives the feel. Keep in mind, what you’re seeing is three stories above street level.

Congratulations to all those involved with the project. It adds a new dimension to the term urban renewal.

[For further perspective on the High Line’s architecture and landscape design, including a slideshow and video, see Nicolai Ouroussoff’s review in The New York Times.]

Wednesday, June 10, 2009


Below is the “GE basic floodlight 45.” I direct your attention to the little “TM” next to the word “basic.” The “TM” is an attempt to assert a trademark on the word “basic.”

From Wikipedia’s Trademark page:

A trademark or trade mark is a distinctive sign or indicator used by an individual, business organization, or other legal entity to identify to consumers that the products or services with which the trademark appears originate from a unique source, and to distinguish its products or services from those of other entities.

Even within the context of floodlights as a product category, it’s unclear how the word “basic” provides any uniqueness or distinction. So how can that “TM” be there?

“TM” indicates an unregistered trademark, which is just a unilateral assertion of trademark. For example, whoever at GE decided that “basic” was worthy of a “TM” did not need to ask a government agency for permission. With an unregistered trademark, the classic Nike slogan applies: you can “just do it.”

But wait, that Nike slogan is a registered trademark, denoted by an R inside a circle. In the United States, it means the U.S. Patent and Trademark Office found the mark to meet the USPTO’s criteria for distinctiveness and such. With this government endorsement, a registered trademark provides a far firmer legal basis for defending a mark.

So the next time you see a dubious “TM,” remember that “TM” is more bark than bite.

Tuesday, May 26, 2009

Meaningful Numbers on Commuter-Airline Safety

From The New York Times, in Pilots’ Lives Defy Glamorous Stereotype, we learn:

[O]f the six scheduled passenger flights that have crashed since Sept. 11, 2001, only one has been from a major carrier. Four, including the one in Buffalo, were commuter flights; a total of 133 people died on those flights. (The fifth, a 50-year-old seaplane in Miami, was in neither category.)

That sounds bad. However, missing from the story is the context necessary to understand the numbers. For example, what if commuter airlines flew four times as many flights as major airlines? Then the 4 to 1 ratio of commuter-airline to major-airline crashes would be expected. (We will ignore the fact that the tiny number of cases makes extrapolating fuzzy at best, and instead be thankful that the data on this topic is sparse.)

To the rescue we have Ben Sherwood’s Wing and a Prayer: How Safe is My Next Regional Plane Flight?, from The Huffington Post. According to Sherwood, commuter airlines fly roughly the same number of flights as the major airlines. But wait, Sherwood goes the extra step to find what appear to be the more relevant numbers:

For the absolute latest on the risk of death on a regional carrier [Sherwood’s term for commuter airline], I checked with Arnold Barnett, a brilliant MIT professor who happens to be afraid of flying and who specializes in statistics on aviation safety.

Barnett points out that all the news this week about the Continental crash and safety questions about regional carriers have blurred an important distinction between jet and propeller aircraft.

“Historically, the safety record for piston and prop-jet aircraft has not been as good as that for pure jets,” Barnett says. “US regional jet flights have a splendid safety record,” he goes on. “They have suffered only one fatal crash in the past two decades.”

According to Barnett’s analysis, your risk of death on your next regional jet flight in the US is 1 in 30 million. In other words, you can travel every day for the next 82,191 years—on average—before you will die on a regional jet. (For comparison, your chance of dying on your next trip on a major carrier—one of the big airlines—is 1 in 60 million).

Prop-jets—planes with propellers driven by turbo-jet engines—are a different story, Barnett points out. Your risk of death on your next prop-jet flight, he says, is 1 in five million. Yes, the risk is greater than a jet flight, but you can still fly every day for a very very very long time before you run into a problem.
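The “fly every day for N years” framing in the quote is a direct conversion of a per-flight risk into an expected wait between fatal events (a 1-in-n event takes, on average, n trials). A sketch using Barnett’s numbers:

```python
# Convert a per-flight fatality risk of 1-in-n into "years of one
# flight per day": the expected wait for a 1/n event is n trials.
def years_of_daily_flights(one_in_n):
    return one_in_n // 365

print(years_of_daily_flights(30_000_000))  # regional jet: 82191 years
print(years_of_daily_flights(60_000_000))  # major carrier: 164383 years
print(years_of_daily_flights(5_000_000))   # prop-jet: 13698 years
```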

Thank you for the perspective, Arnold Barnett and Ben Sherwood.

Sunday, May 17, 2009

Spread the Word: Affordance

Pictured at left are the shower/tub controls from one of our bathrooms at home.

The handle pointing to five o’clock regulates the water flow. The knob pointing to one o’clock sets the temperature. But how do you activate the shower?

Look at the picture again. What do you think?

The answer is, you pull down on the tub spout’s ring, where the water comes out. This plugs the tub spout and directs the water up to the shower head.

Functionally it makes sense, but the only thing obvious about this feature is its need for a better affordance: a design element that shows the user what to do.

The computer-user-interface expert Don Norman popularized the term affordance in his book The Design of Everyday Things. One of his classic examples was the door that you pull, only to find that you need to push it. Why did you pull? Probably because it had a pull-style handle, when it should have had a flat plate. The latter can only be pushed. (A rule of thumb: If a door needs a sign to tell you “push” or “pull,” it could use a better affordance.)

Although the term sounds jargony, affordance provides a shorthand for what would otherwise have to be described as “something that shows you what to do with it.” The term is used primarily by certain types of designers, but we would all benefit if a wider array of professionals understood and internalized it. Spread the word.

Saturday, April 25, 2009

100 Years of Change

There’s a paradox in reading a book like George Friedman’s The Next 100 Years: A Forecast for the 21st Century. The author predicts a future event—for example, global war between the United States, Turkey, Poland, and Japan in 2050. The author makes a case for why it will happen. Given the extreme number of possible futures 40 years from now, the chances of that specific thing happening—or even something close to it happening—are remote, even considering that history has patterns, and events are far from random. Yet because the author has made the case, the reader is in the position of considering why that remote possibility won’t occur.

If you find that fun, or would appreciate an instigation to think long term, you might like the book. Just be prepared that war, and the balance of power enforced by the threat of war, is Friedman’s primary engine of history. He covers other factors such as demographic trends, but his main lens is geopolitical—think the board game Risk, tilted heavily toward Team USA.

For me, the book was worth finishing, but I admit to ever-increasing bouts of skimming as the future became farther flung. Ironically, my biggest takeaway from the book was its brief historical survey of how much things changed, in terms of global power, in the twentieth century. Friedman’s version is good, albeit overly dramatic in places. I’d do it a little differently:

  • 1900: Europe was the world’s power center, with the major players at peace.
  • 1920: World War I ended the Austro-Hungarian, Russian, German, and Ottoman empires, resulting in a collection of fragmented, diminished states. Meanwhile, the United States was growing stronger, and Russia had just turned to communism.
  • 1940: Germany was back, invading and intimidating its way to European dominance. Russian communism had consolidated and extended its power as the Soviet Union, which (for the moment) was allied with Germany. The United States was trying to avoid direct involvement in Europe’s battles while providing backdoor aid to Great Britain’s defense against Germany.
  • 1960: World War II left Europe split between the United States and the Soviet Union, relegating the former European masters to secondary players. Japan’s defeat in World War II left the United States as master of the Pacific. China had gone communist and opposed the United States in the Korean War.
  • 1980: First among superpowers, the United States was showing weak spots. With Soviet and Chinese backing, North Vietnam drove the United States out of Vietnam; the oil cartel OPEC demonstrated its power over the U.S. economy; and an Islamic revolutionary movement was rolling back American influence in Iran.
  • 2000: The Soviet Union collapsed. China was embracing capitalism. The United States was now the world’s only superpower. It and the second tier of major players were at peace, having presided over a long economic expansion.

That’s quite a hundred-year cycle.

Sunday, April 12, 2009

Truth from the Technical Trenches

In a recent interview with Charlie Rose, Google CEO Eric Schmidt explained how Bill Joy gets investment ideas:

I have a friend who is a venture capitalist, Bill Joy, who described how he does venture capital. He uses Google to search for all the new ideas....He starts off — he starts off with a search. I’m interested in hydrodynamics. And he learns by digging — by repetitive searching until he finds the [technical] papers that are authoritative. He looks for who the authors are, and he calls the authors. These are people no one ever calls. So they return his call. [laughter] Right? And that’s how he learns.

[The video and transcript of the interview are at this TechCrunch page.]

A long time ago, when my job was to forecast the future of digital media, I used a method similar to Joy’s. It was the early 1990s, before anyone knew what the Web was. The big issue of the day was interactive television (ITV). Cable and phone-company CEOs clamored to tout how amazing their new services would be. In lieu of actual services, they had fancy mock-ups. However, the only reality these mock-ups demonstrated was the press’s ravenous hunger for stories about ordering a pizza via your TV’s remote control.

As you might guess, the CEOs and marketing types were pushing the big ideas and alluring mock-ups. Meanwhile, out of the spotlight, squads of engineers were scrambling to make the network, server, and set-top technologies fulfill the vision. Every big phone and cable company had a field trial planned. It was a race to the promised land.

During this hubbub, a reporter from the Wall Street Journal interviewed me about prospects for an upcoming trial by the phone company US West, which planned to use set-top boxes adapted from the video-game company 3DO. The trial was planned for a certain date, and the reporter had asked me how realistic that date was. I said, and was quoted in the Journal, that it was not realistic.

This perspective contradicted all the publicly stated information about the trial, and it made some waves at the time. However, it proved correct, as I knew it would. Why? Because I had been talking to several engineers involved with the trial. Per the Bill Joy story, these were people that no one from the outside world called. But once they realized I could talk their language, and that I knew what was happening in other trials, mutually beneficial conversations ensued.

Like all the other engineers on various trials, the US West / 3DO engineers were not only behind schedule but continuing to slip due to the need to improvise much of the technical work as they went. I’d seen this story many times, and it always got worse before it got better. Thus, the chances of the trial hitting its target date were essentially zero.

With other trials, I’d sometimes find an engineer who tried to recite the company line. But if reality was different, it was easy to detect. For example, a CEO might have claimed that his trial’s set-top box would cost $300 as soon as volume ramped up. But an engineer would be hard-pressed to hold that line when presented with a components list totaling something like $3,000, where most component prices already reflected volume pricing for other existing applications.

That was the beauty of talking to technical people in the trenches. They were closest to the truth of what was actually happening, and their normal inclination was to be realistic in the face of facts. In a commercial world biased toward the production and distribution of hype, this perspective was often a useful corrective and occasionally, when something really was going to work, a powerful confirmation.

Saturday, March 28, 2009

Hyundai Assurance

From the carmaker Hyundai:

A decade ago Hyundai pioneered America’s Best Warranty™. Now we’re providing another kind of confidence. Finance or lease any new Hyundai, and if in the next year you lose your income*, we’ll let you return it. That’s the Hyundai Assurance.

At Hyundai we think it’s easier to find a job when you’ve got a car. That’s why, for a limited time, we expanded Hyundai Assurance, and we’ve added...something extra. A plus, as in Hyundai Assurance Plus. If you lose your income, we’ll make your payments for 3 months while you get back on your feet, and if that’s not enough time to work things out, you can return the car with no impact on your credit.

As a marketing program, I admire Hyundai Assurance for its creative, insurance-like value proposition. If nothing else, it has attracted media attention and gotten people talking about the Hyundai brand. It also recognizes the effect of tough times and lets Hyundai say, “We get it.” Contrast that with the current media image of American auto executives, which is something like:

[Image by Randy Bish of the Pittsburgh Tribune-Review]

Perhaps most interesting is this observation from Rob Walker, writing in the New York Times Magazine:

As of early March, no Hyundai buyer had yet returned a vehicle bought under the Assurance umbrella. This raises the intriguing point about what sort of consumer is being reassured. Probably anybody who is really afraid of losing a job simply isn’t going to buy a car right now. But somebody whose insecurity is more abstract, who perhaps simply needs a rationale for a big-ticket purchase at a moment when the headlines are full of doom — that’s different.

The program started in January 2009, and participants must make at least two monthly payments. So the window has been short for buyers to return a car or miss payments in an allowable way. Still, as of March 23rd, the blog Kicking Tires reported:

[H]yundai spokesman Dan Bedore confirmed that so far no one has used the program. It’s still early in the plan’s lifecycle and final March figures have not come in, but the fact that no buyer has taken advantage of it says that at least the 55,133 people who bought a Hyundai this year probably still have their jobs.

While most automakers’ sales were down in early 2009, Hyundai’s were up.

Sunday, March 22, 2009

Managing by Measuring the Right Things

“If you can’t measure it, you can’t manage it.” This popular business aphorism holds much wisdom. But it should come with a warning: If you manage by what you can measure, you better be measuring the right things.

For example, the computer manufacturer Dell was using “handle time”—how long a representative spent per call—as a key metric in managing its call center. In the quest for efficiency, Dell tried to reduce handle time by compelling reps to make calls shorter. The unintended result? Reps simply transferred the difficult calls around.

From a 2007 BusinessWeek article by Jeff Jarvis:

At Dell’s worst, more than 7,000 of the 400,000 customers calling each week suffered transfers more than seven times....

“It was a real mess,” confesses Dick Hunter, former head of manufacturing and now head of customer service. Dell’s DNA of cost-cutting “got in the way,” Hunter says. “In order to become very efficient, I think we became ineffective.”

Dell changed the key metric to the total minutes necessary to resolve a problem. This change aligned Dell’s and its customers’ interests in minimizing the time to solve problems—as opposed to minimizing any single call’s duration, which was easy to measure but ultimately served nobody’s interest.

Although obvious in retrospect, this kind of issue is hard to see within a company while it is happening. If you don’t believe me, let’s take an example that affects everyone in the United States and, to some extent, the world.

Have you ever thought twice about the “hundred days” yardstick that is routinely applied to new presidents? Made famous by Franklin D. Roosevelt’s flurry of activity upon taking office in 1933, a president’s first hundred days is often seen as a predictor of the president’s subsequent success or failure.

Writing in The Wall Street Journal, David Greenberg, a professor of history and media studies at Rutgers University, argues that the hundred days yardstick is problematic:

It places too much emphasis on easily quantifiable early achievements, directing attention to the number of laws passed. Passing laws isn’t necessarily the best indicator of a strong presidency. When a president’s party controls the Congress, it’s easy for him to sign bills that were queued up before he arrived — something that may hearten his supporters but doesn’t attest to great vision or legislative prowess.

Many things can matter more than laws getting passed. Behind Eisenhower’s lackluster debut — he sent no domestic program to Congress — lay an important bureaucratic reorganization and a review of national security strategy that led to his “New Look” foreign policy....

A president may also have a successful hundred days due to events outside his control. Reagan was struggling to pass his tax cuts when John Hinckley’s bullets landed him in the hospital. The outpouring of sympathy, aided by Reagan’s winning bedside humor, buoyed his popularity and helped him win a big victory. But that success didn’t foreshadow any continued mastery of Congress; his relations with the Democratic House and, later, the Senate would deteriorate.

Greenberg demonstrates that incoming presidents since FDR have been acutely aware of the hundred-days milestone. This awareness can push presidents to run before they walk, trying for legislative wins in the time period when they are least experienced at running the machinery of government.

Granted, some presidents take office when major action is necessary. FDR was the prototype, and Obama faces similar conditions, albeit less extreme. But judging even Roosevelt after a hundred days would have shown he did a lot of things, not whether those things were productive. It also would give no indication of Roosevelt’s ability to lead in the geopolitical context that became World War II—something that contributed to his legacy as much as his response to the Great Depression.

So, while judging the first hundred days has become a tradition, it’s worth asking what we really learn from it, and whether we want our presidents to manage to that milestone, especially when circumstances don’t force quick action.

The larger point: The seeming clarity of managing by measuring can mask the subtleties—or sometimes the outright counterproductivity—of the measurements involved.