Decrying wine critics, publications, and competitions that use numerical ratings to measure a wine’s worth has been the fashion for a while — but now a few academic types are using research to back up the rancor.
Robert Hodgson is one such academic. As the Wall Street Journal recounted in detail over the weekend, the retired statistics professor turned vintner joined the advisory board of the California State Fair’s famed wine competition a few years back and subjected the competition’s judges to a controlled scientific study. Each year for four years, some 70 judges blind-tasted roughly 100 wines over a two-day period, with each wine served three times. On average, a judge’s ratings of the same wine varied by plus or minus four points from one tasting to the next.
Hodgson followed up on that study, which was published earlier this year in the Journal of Wine Economics, with a broader look at multiple wine competitions, and the statistical odds that any one wine will win a gold medal in any competition. To quote the Journal, “The medals seemed to be spread around at random, with each wine having about a 9% chance of winning a gold medal in any given competition.” The results of the second study were published in September in a newsletter called the California Grapevine.
As recently as last week, ratings skeptics found backing in the pages of the academic journal Psychological Science, which featured Brock University business professor Antonia Mantonakis’ article “Order in Choice: Effects of Serial Position on Preferences.”
Mantonakis and her coauthors subjected volunteers to yet another blind tasting, with each taster sampling five glasses. Among novice tasters, the first glass consistently earned higher marks than the second and third, while wine experts preferred the fourth or fifth glass. The kicker? Unbeknownst to the tasters, all five glasses contained the same wine.