Houston research: How social pressures affect digital product evaluations
Theoretically, product evaluations should be impartial and unbiased. However, this assumption overlooks a crucial truth about product evaluators: They are human beings who are concerned about maintaining credibility with their audience, especially their peer evaluators.
Because evaluators also care about how they themselves are perceived, as both legitimate and skillful, certain social pressures come into play that can influence their product reviews.
Research by Minjae Kim (Rice Business) and Daniel DellaPosta (Penn State) takes up the question of how evaluators navigate those pressures. They find that in some cases, evaluators uphold majority opinion to appear legitimate and authoritative. In other contexts, they offer a contrasting viewpoint so that they seem more refined and sophisticated.
Imagine a movie critic who gives a glowing review to a widely dismissed film. By departing from the aesthetic judgments of cinema aficionados, the reviewer risks losing credibility with their audience. Not only does the reviewer fail to understand this specific film, the audience might conclude; they fail to understand film and filmmaking more broadly.
But it’s also conceivable, in other situations, that the dissenting evaluator will come across as uniquely perceptive.
What makes the difference between these conflicting perceptions?
Partly, it depends on how niche or mainstream the product is. With large-audience products, Kim and DellaPosta hypothesize, evaluators are more willing to contradict widespread opinion. (Without a large audience, a dissenting opinion is like a tree falling in a forest with no one around to hear it.)
The product’s perceived status can affect the evaluator’s approach as well. It’s easier to dissent from majority opinion on products deemed “lowbrow” than on those deemed “highbrow.” Kim and DellaPosta suggest it’s riskier to downgrade a “highbrow” product that seems to require sophisticated taste (e.g., classical music) than a highly rated yet “lowbrow” product that seems easier to appreciate (e.g., a blockbuster movie).
Thus, the “safe spot” for disagreeing with established opinion is a product that has already been widely and highly reviewed yet appears easy to understand. In that case, evaluators might sense an opportunity to stand out rather than fit in. Disagreeing just for the sake of disagreeing, though, can make an evaluator seem unfair or unreasonable; to avoid that perception, it is often safer to go along with the high rating.
To test their hypotheses, Kim and DellaPosta used data from BeerAdvocate.com, a beer enthusiast site where amateur evaluators publicly rate and describe their impressions of a wide range of beers, from craft to mainstream, while also engaging with other users.
The data set included 1.66 million user-submitted reviews of American-produced beers, covering 82,077 unique beers from 4,302 brewers, written by 47,561 reviewers and spanning 103 distinct beer styles. The reviews ran from December 2000 to September 2015.
When the researchers compared scores given to the same beer over time, they confirmed their hypothesis about the conditions under which evaluators contradict the majority opinion. On average, reviewers were more inclined to contradict the majority opinion on a beer that had been highly rated and widely reviewed. When reviewers considered a particular brew “lowbrow,” downgrading occurred to an even greater extent.
Kim and DellaPosta’s research has implications for both producers and consumers. Both groups should be aware of the social dynamics involved in product evaluation. The research suggests that reviews and ratings are as much about elevating the people who make them as they are about product quality.
Making evaluators identifiable and non-anonymous may help increase accountability for what they say online — a seemingly positive thing. But Kim and DellaPosta reveal a potential downside: Knowing who evaluators are, Kim says, “might warp the ratings in ways that depart from true objective quality.”
------
This article originally ran on Rice Business Wisdom and was based on research from Minjae Kim, assistant professor of Management – Organizational Behavior at Rice Business, and Daniel DellaPosta, associate professor of Sociology and Social Data Analytics at Pennsylvania State University.