In a hardball interview in today’s New York Times business section, writer Kermit Pattison grills Yelp CEO Jeremy Stoppelman about his company’s alleged “extortion” schemes. The issue, of course, is the national class action lawsuit against Yelp from local business owners who claim Yelp manipulated their reviews to pressure them into advertising, moving bad reviews to the top if they didn’t buy ads, and good ones to the top if they did. (For more in-depth reading on the allegations, the Oakland, California, alternative newsweekly the East Bay Express published a fascinating story last year.)

Stoppelman has been trying to shine up his public image recently, and in today’s interview he makes a valiant effort. So, does his company extort people? “Absolutely not,” says Stoppelman. Yelp created an “automated and algorithmic review filter” that “doesn’t take into account advertiser status, and works the same for everyone.” In fact, this remarkable algorithm “ensures that the customer sees…trustworthy information.” Damn, this algorithm surely sounds like the bomb. “The coffee shop across town doesn’t like you…and wants to write negative reviews…We’re trying to catch that and remove it automatically.”

Extortion issues aside, “ensuring trustworthiness” on a review site seems like a pretty tough task to automate. That’s an understatement. Human moderators on our user-generated review site, Chowhound, spend a lot of time and energy weeding out suspiciously glowing reviews that have, in fact, been written by the restaurant’s own staff or PR firm. The tales of how these shills are uncovered and banished often read like mini gumshoe detective stories, with somewhat less exciting plot twists: “And then we noticed that the email address of the reviewer was an anagram of the restaurant name!” In any case, I’m no computer whiz, but is it possible to write an algorithm that can keep up with human beings’ remarkable ability to lie and deceive?
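For what it’s worth, even that anagram plot twist is the kind of check a machine could run. Here’s a minimal, hypothetical sketch (the function name, the email addresses, and the restaurant name are all made up for illustration; this is not how Yelp or Chowhound actually does it) that tests whether the part of an email address before the “@” is an anagram of a restaurant’s name:

```python
from collections import Counter
import re

def is_anagram_of_restaurant(email, restaurant_name):
    """Return True if the local part of the email (before the '@')
    uses exactly the same letters as the restaurant name,
    ignoring case, spaces, and punctuation."""
    local_part = email.split("@")[0]

    def letter_counts(text):
        # Keep only letters, lowercase them, and tally each one.
        return Counter(re.sub(r"[^a-z]", "", text.lower()))

    counts = letter_counts(local_part)
    # Require at least one letter so empty strings don't "match."
    return len(counts) > 0 and counts == letter_counts(restaurant_name)

# Hypothetical shill: "tbistro" rearranges to "Bistro T"
print(is_anagram_of_restaurant("tbistro@example.com", "Bistro T"))   # True
print(is_anagram_of_restaurant("alice@example.com", "Bistro T"))     # False
```

Of course, that catches exactly one trick, which is the real problem: the moderators’ job is spotting the pattern nobody has seen yet, and that part resists automation.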
