Tuesday, February 25, 2014

Herd Instincts Are Biasing Online Reviews

Social influence can lead to disproportionately positive online ratings, thanks to the human impulse to think and act in the same way as people around us.


How reliable are consumer-generated online ratings?


Less reliable than you might hope, it turns out.


Recent research suggests that online ratings — the opinions people post at websites about everything from restaurant experiences to hotel recommendations to product reviews — are systematically biased and easily manipulated.


And the bias tilts positive: It turns out that after we see other positive reviews, many of us write reviews that are more positive than we planned to. “When we see that other people have appreciated a certain book, enjoyed a hotel or restaurant or liked a particular doctor — and rewarded them with a high online rating — this can cause us to feel the same positive feelings about the book, hotel, restaurant or doctor and to likewise provide a similarly high online rating,” writes Sinan Aral in “The Problem With Online Ratings,” in the Winter 2014 issue of MIT Sloan Management Review.


“The heart of the problem lies with our herd instincts — natural human impulses characterized by a lack of individual decision making — that cause us to think and act in the same way as other people around us,” Aral writes. Those herd instincts combine with our susceptibility to positive “social influence,” he argues.


In his article, he describes a simple experiment he conducted with two colleagues on a social news-aggregation website (similar to reddit.com, he says). On the site, users rate news articles and comments by voting on how much they enjoyed them. “We randomly manipulated the scores of comments with a single up or down vote,” he writes, where “up” means “yes, I enjoyed the comment,” and “down” means “no, I didn’t.” They then measured the impact of these small manipulations on subsequent scores.


One of the five key results: The “positive social influence bias” created by a single initial up vote increased the likelihood of subsequent positive ratings by 32%, and the effect persisted over five months. Ultimately, that initial up vote increased comments’ final ratings by 25% on average. Negatively manipulated scores, meanwhile, were offset by a “correction effect” that neutralized the manipulation: “Although viewers of negatively manipulated comments were more likely to vote negative (evidence of negative herding), they were even more likely to positively ‘correct’ what they saw as an undeserved negative score,” Aral notes.
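This asymmetric dynamic — herding behind positive scores, correction of negative ones — can be sketched as a toy simulation. The probabilities below are illustrative assumptions, not figures from the study: each voter leans slightly positive at baseline, herds when a comment's score is already positive, and corrects even more strongly when it is negative.

```python
import random

def simulate_comment(initial_vote=0, n_voters=50, seed=None):
    """Final score of one comment after n_voters sequential votes.

    All probabilities are illustrative assumptions, not the study's values.
    """
    rng = random.Random(seed)
    score = initial_vote
    for _ in range(n_voters):
        if score > 0:
            p_up = 0.60   # positive herding: pile onto an already-liked comment
        elif score < 0:
            p_up = 0.65   # correction effect: undo an "undeserved" negative
        else:
            p_up = 0.55   # baseline positive lean
        score += 1 if rng.random() < p_up else -1
    return score

def mean_final_score(initial_vote, trials=2000, seed=1):
    """Average final score over many simulated comments."""
    rng = random.Random(seed)
    return sum(simulate_comment(initial_vote, seed=rng.random())
               for _ in range(trials)) / trials

# Compare planted up vote vs. control vs. planted down vote.
print(mean_final_score(1), mean_final_score(0), mean_final_score(-1))
```

In this sketch, comments seeded with a single up vote end with a higher mean final score than unmanipulated controls, while corrective upvoting pulls negatively seeded comments back toward the control mean — mirroring the asymmetry the study reports, though with made-up parameters.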


Initial positive ratings — deserved or not, planted or not, posted anonymously by someone with an interest in the product or not — have the potential to create an undeserving online superstar. “Positively manipulated scores were 30% more likely than control comments (the comments that we did not manipulate) to reach or exceed a score of 10,” Aral writes. “And reaching a score of 10 was no small feat; the mean rating on the site is 1.9. A positive vote didn’t just affect the mean of the ratings distribution; it pushed the upper tail of the distribution out as well, meaning a single positive vote at the beginning could propel comments to ratings stardom.”


Bottom line? Both consumers and executives reviewing online feedback should take positive online ratings with a grain of salt. And managers “should encourage and facilitate as many truthful positive reviews as possible in the early stages of the ratings process,” Aral writes. “Systematic policies to encourage satisfied consumers to rate early on could change the minds of future consumers to feel more positively toward the products or services.”


This article draws from “The Problem With Online Ratings,” by Sinan Aral (MIT Sloan School of Management), which appeared in the Winter 2014 issue of MIT Sloan Management Review.






via Business 2 Community http://ift.tt/1dtkVj9
