Red Wine Bias? More Thoughts and Questions
Do wine critics display a red wine bias? In a pair of recent articles, Wine Curmudgeon argues that they do and encourages you to agree. He published a data analysis that he and a colleague did which, he says, supports this conclusion. I have a number of issues with their conclusion and methodology. Last week I wrote “Red Wine Bias Among Wine Critics,” in which I pointed out some flaws in their thinking. Today, I detail even more issues.
Transparency and Opportunity for Peer Review
It is easy to make inadvertent mistakes in research. It is also common to overlook possible flaws or biases in the processes of research and analysis. And the scientific method dictates that, before a conclusion based on observation can be considered fact, the observation must be consistently repeatable, preferably by other researchers. Therefore, in academic research it is customary—perhaps mandatory—to be transparent with respect to data and the data generation process.
In the world of politics, lobbying and business, where someone is trying to sell a particular conclusion and/or maintain a competitive advantage, raw data and methodologies are kept close to the vest. This makes it difficult for others to take issue with the research process or disprove the conclusion. The goal is selling, not discovering. This is the approach taken, intentionally or not, by Wine Curmudgeon.
Here are my concerns:
- The number of data points is disclosed, but the source is not. We are told the reviews were provided “on the condition of anonymity.” This means we will never see the reviews or even know where they came from.
- The reviews are said to come from “major wine magazines.” That means the reviews were published. Why would magazines need anonymity for providing reviews they have already published?
- We have no idea which reviewers are represented or in what proportions. Scoring is intended to be objective, but critics don’t agree on every wine. (That’s why there’s room for more than one critic.) The distribution of scores may vary wildly depending on whose ratings you consider.
- We are told there were 14,885 white scores and 46,924 red scores. That is a significant imbalance: reds outnumber whites by more than three to one.
- We are not told what the breakdown is by style or varietal. Varietal mix will have a considerable impact on average scores.
- We are told “the scores do not include every wine that the magazines reviewed, so the data may not be complete.”
- We are told “the data was not originally collected with any goal of being a representative sample.” Mind boggled. We are being sold a conclusion with data as “evidence,” yet the researchers admit not only that no effort was made to provide representative data, but that a balanced data set was not even a goal.
- We are told the data set includes “scores dating from the 1970s.” Yet no data from years prior to 1994 are represented in any of the charts provided. Where are the old reviews? What do they say? Is there a significant number of such reviews or was the notion of such reviews just fed to us in order to make the data seem broader and more authoritative than it is?
- Figure 5 in the Wine Curmudgeon report, “Expert Scores and Red Wine Bias: A Visual Exploration of a Large Dataset,” seems to indicate that 100% of red wine scores are greater than 90 points. Opening any magazine with wine reviews will provide plenty of proof that red wines still score 90 points or less on a regular basis. This point alone suggests problems with the raw data.
“Red Wine Bias” Appears to Have Decreased Over Time
For the moment, let’s assume the data presented are somehow representative and have value. Figure 5, referenced above, clearly shows the difference in scores between red and white—the theoretical “bias”—has decreased substantially over time. According to that chart,
- In the mid-’90s, 60% of red wine scores were above 90 points while only about 35% of white scores were.
- By about 2006, the gap had narrowed considerably: 90% of red wine scores topped 90 points, but 81% of white scores did as well.
- In 2012, 92% of reds got more than 90 points, but so did 90% of whites.
- The gap increased for 2015 but, given the “data” asserting 100% of red wines got more than 90 points in 2015, I don’t believe those data have enough integrity to be worth considering.
To me, the Wine Curmudgeon articles are pushing a point of view: that critics are biased against white wines. In doing so, WC is missing a much more interesting finding, a revelation that could inspire valuable explorations. White wines today are reviewing substantially better than they used to. In less than two decades, the percentage of white wines scoring better than 90 points has gone from about 35% to 90%. Why is that? Has the “red wine bias” disappeared?
Not All White Varieties, or Red Varieties, are Equal
In his latest article, Wine Curmudgeon is “sad” about the notion he found in some comments that serious scores should only be given to serious grapes. He relates that idea to the non-egalitarian notion expressed by the final commandment remaining on the barn in Orwell’s Animal Farm: “All animals are equal, but some animals are more equal than others.”
The inference that all grape varieties should be considered equal is kind, but overly generous. Different grape varieties have different characteristics. These lead to differences in body, acidity, alcohol, texture, aroma, flavor, complexity, longevity, etc. Terroir, viticultural choices and winemaking technique also have substantial effects. Those differences are a big part of what enthusiasts love about wine. They lead people to try many varieties and travel to multiple wine regions.
Different wines, or varieties, are suited for different purposes based on their particular characteristics. If I want a wine to go with oysters, I might select a Muscadet (made from Melon de Bourgogne). Riesling would not be my first choice. That does not mean Melon is a superior grape to Riesling.
If one is considering the wines on their own, in the absence of pairings, the opposite is true. Melon works with oysters because it is relatively light in body, flavor and texture. Its gentle freshness and simple, delicate flavors complement, rather than obliterate, the charm of an oyster.
But Riesling indisputably has more capacity for body, complexity, evolution over time and intensity in aroma and flavor. Common sense, and pairing suggestions, tell us that, despite a Riesling’s 95-point score, it will likely overwhelm shellfish. That does not mean the score is invalid or that Melon is Riesling’s equal.
Which Red and White Varieties are Represented in the Study?
The same is true in comparisons among red wines. Dolcetto can be a delightful wine, increasingly so in recent years. But the greatest Dolcetto made does not rival the 100th best Nebbiolo in complexity and aging capacity. The same can be said about Zinfandel, no matter how much I like that grape, relative to Cabernet Sauvignon.
We have not been told which scores in the WC study pertain to which varietals, or even what the overall distribution between varieties is in the data. If the red wine reviews are heavy in Cabernet Sauvignon and Pinot Noir, that will skew average scores upward for red wines. And that does not demonstrate a bias toward reds over whites; it reflects a greater—and more frequently realized—potential for quality in Cab and Pinot. The rough illustration below shows how varietal mix alone can move the averages.
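To make that concrete, here is a minimal sketch with entirely hypothetical counts and scores (none of these numbers come from the Wine Curmudgeon data set). Even if every individual variety earned exactly the same scores regardless of color, a red pool weighted toward Cabernet and Pinot would show a higher average than a white pool weighted toward lighter, simpler varieties.

```python
# Hypothetical illustration only: the counts and scores below are invented,
# not taken from the WC data. The point is how varietal mix alone can raise
# one color's average score without any per-variety bias.

def weighted_average(groups):
    """groups: list of (review_count, average_score) pairs, one per varietal."""
    total = sum(count for count, _ in groups)
    return sum(count * avg for count, avg in groups) / total

# Reds dominated by reviews of high-potential varieties...
reds = [(800, 92.0),    # e.g. Cabernet Sauvignon / Pinot Noir
        (200, 87.0)]    # e.g. Dolcetto

# ...whites dominated by reviews of lighter, simpler varieties.
whites = [(300, 92.0),  # e.g. top Riesling
          (700, 87.0)]  # e.g. everyday Pinot Grigio

print(round(weighted_average(reds), 1))    # 91.0
print(round(weighted_average(whites), 1))  # 88.5
```

The numbers are not real; the point is that averages computed over an undisclosed varietal mix cannot, by themselves, demonstrate a color bias.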
To Sum Up
I believe in science, in research and in having an open mind. I am always ready to change my mind if evidence suggests I should. The Wine Curmudgeon report does not prove there is a red wine bias. The data don’t appear to be representative, meaning that whatever tendencies they may reflect should be questioned. The analyses disregard significant variables. And, if anything, the data show that any red wine bias which did exist has nearly disappeared.
Copyright Fred Swan 2016. All rights reserved.