I know I'm preaching to the converted here in complaining about mainstream wine reviewers in general and numerical ratings in particular, but this just flabbergasts me.
Jay Miller, who I gather is Robert Parker's longtime drinking buddy, only started reviewing for Wine Advocate in 2007. In less than a year, he has managed to find no fewer than 8 "100 point" - presumably "perfect" - wines. And, I should add, more than 40 other wines worthy of 98 points or above. All of this in roughly three rounds of tastings, I believe.
In the narrative commentary on the current reviews of 2004 and 2005 Oregon wines, he suggests that neither is a particularly strong vintage. Of 2004: "The resulting wines tend to be light and forward, with the best of them possibly evolving for a few years and becoming user-friendly, seamless, and easy to drink. Almost none of them will make old bones." 2005 likewise sounds like a pretty mixed vintage: "In 2005, all too many Willamette Valley Pinot Noirs lack that balancing fruit. Almost all of them reveal lower alcohol, elevated acidity, and firm structures. Only the top examples have enough fruit to merit cellaring. Most of them will need to be drunk near-term while the fruit remains intact. However, a number of the top producers were able to solve the problems of the vintage, hit a few home runs, and make some outstanding wines. These will be worth buying and cellaring." You would think from this less-than-stellar description that not many wines would be rated highly - only those from perhaps a few top producers.
Well ... of the 177 wines rated, 93 - more than half - received 90 points or higher. (Supposedly, a rating of 90 points or higher indicates "An outstanding wine of exceptional complexity and character. In short, these are terrific wines.") That sure seems like more than a "few home runs." All but 16 of the wines reviewed - about 91% - were rated 87 points or higher. The comments do indicate that roughly another 100 wines were tasted but not reviewed, but how can these ratings possibly be squared with these supposedly being weak vintages?
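For anyone who wants to check the arithmetic behind those percentages, here is a quick sanity check using only the counts quoted above (177 rated, 93 at 90+, 16 below 87); the variable names are just for illustration:

```python
# Counts taken from the published review batch discussed above.
total_rated = 177
rated_90_plus = 93
rated_below_87 = 16

# Share of rated wines scoring 90 points or higher.
share_90_plus = rated_90_plus / total_rated

# Share of rated wines scoring 87 points or higher.
share_87_plus = (total_rated - rated_below_87) / total_rated

print(f"90+ points: {share_90_plus:.1%} of rated wines")
print(f"87+ points: {share_87_plus:.1%} of rated wines")
```

Running this shows that 93 of 177 is indeed over half, and 161 of 177 works out to roughly 91% - remarkable numbers for vintages the same reviewer describes as weak.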
Just one more lesson in why numerical ratings are arbitrary and pointless.