When the stars don’t align: CNET user ratings don’t add up

In the age of social shopping, we place a tremendous amount of faith in the opinions of our peers, and a tremendous amount of trust in the systems that gather and organize those opinions. We spend most of our time wondering whether the manufacturer’s claims are fake, taking for granted that the data we base our decisions on is accurate. Apparently, that isn’t something we should take for granted.

I am a very social shopper. I love to research my possible purchases, and I tend to weigh user reviews more heavily than editors' reviews when it comes to judging real-world usage. For any technology-related purchase, my first stop is always CNET; I have trusted their knowledge and expertise for over a decade. Yesterday, while looking into a Black Friday deal on a Samsung SyncMaster S23A350H monitor, I wondered why the editors gave it a full 3 stars but the user reviews gave it a lowly 2 stars.

Something struck me as odd as I was looking over the reviews. They didn’t seem that bad, so I did the math. With only 5 reviews, it was easy math, but the result was a big surprise.

Samsung SyncMaster monitor user review scores

(0 x 5) + (1 x 4) + (2 x 3) + (1 x 2) + (1 x 1) = 13
13 / 5 = 2.6 stars

Based on the scores CNET lists, this monitor should have had a 2.5 star Average User Rating. On a 5-star scale, a half-star deficit is significant.
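
These figures are easy to sanity-check in code. Here is a minimal Python sketch of the same arithmetic, assuming the per-star counts shown in the screenshot: a straight average, then rounded to the nearest half star the way a ratings widget would display it. The function names and the rounding rule are my assumptions, not CNET's published method.

def average_rating(counts):
    # Straight average from a {stars: count} breakdown of user reviews.
    total_reviews = sum(counts.values())
    total_stars = sum(stars * n for stars, n in counts.items())
    return total_stars / total_reviews

def to_half_stars(rating):
    # Round to the nearest half star for display (assumed display rule).
    return round(rating * 2) / 2

# Per-star counts for the Samsung SyncMaster S23A350H, from the screenshot.
samsung = {5: 0, 4: 1, 3: 2, 2: 1, 1: 1}
print(average_rating(samsung))                 # 2.6
print(to_half_stars(average_rating(samsung)))  # 2.5 -- CNET displays 2.0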

Before jumping to conclusions, I decided to spot-check a few more products. The news isn’t good. The poor Motorola Droid Bionic arguably suffers from an even more egregious shortage:

Motorola Droid Bionic user review scores

(60 x 5) + (18 x 4) + (13 x 3) + (9 x 2) + (7 x 1) = 436
436 / 107 = 4.07 stars

Any way you do the math, the Droid Bionic should be showing a full 4 stars, but the screenshot speaks for itself with 3.5 blue stars. My other two tests had similar shortages:

Sony Bravia user review scores

(2 x 5) + (1 x 4) + (0 x 3) + (0 x 2) + (2 x 1) = 16
16 / 5 = 3.2 stars

Nintendo 3DS user review scores

(16 x 5) + (10 x 4) + (5 x 3) + (0 x 2) + (0 x 1) = 135
135 / 31 = 4.35 stars (which should round up to 4.5)
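
Running the same two helpers (average_rating and to_half_stars from the sketch above) over the other three breakdowns, with counts transcribed from the screenshots, reproduces every figure here:

products = {
    "Motorola Droid Bionic": {5: 60, 4: 18, 3: 13, 2: 9, 1: 7},
    "Sony Bravia":           {5: 2,  4: 1,  3: 0,  2: 0, 1: 2},
    "Nintendo 3DS":          {5: 16, 4: 10, 3: 5,  2: 0, 1: 0},
}
for name, counts in products.items():
    avg = average_rating(counts)
    print(f"{name}: {avg:.2f} stars -> displays as {to_half_stars(avg)}")

# Motorola Droid Bionic: 4.07 stars -> displays as 4.0
# Sony Bravia: 3.20 stars -> displays as 3.0
# Nintendo 3DS: 4.35 stars -> displays as 4.5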

Every one of the Average User Ratings I checked is mathematically flawed. Sure, you could argue that if they're all too low by the same amount, it's still a level playing field, but the simple fact is that the absolute values can have a major impact on buying decisions. Consumers may also be comparing data from multiple sources in their research. A missing half-star, in the context of the overall user opinions, could be the difference between buying and bailing.

Hopefully CNET will address this issue quickly (and if I hear from them that they have, I will gladly update this post to say so). For manufacturers, the lesson in all of this is that you have to monitor the feedback and public opinions posted about your products, not just for quality but also for quantity and for any aggregate scores being published about them.


2 Responses to When the stars don’t align: CNET user ratings don’t add up

  1. Steve Miller says:

    Perhaps their algorithm is a bit more complex. Consider professional boxing: have you ever noticed that a boxer NEVER receives less than 8 out of 10 from a judge? What happened to 0-7? My point is: perhaps when someone leaves a less-than-5-star rating on CNET, the folks there take notice, because they are probably real buyers of the product (and not merely company reps or distributors providing the easy 5-star rating).

    • If they want to publish a "Weighted User Average" based on some algorithm, that's their business, but in this case they simply call it an "Average User Rating". There is a fairly universal expectation of how that would be calculated. What's more, it is being represented as the embodiment of the users' ratings. You can't really mess with that without violating the spirit of the user feedback.

      Oh, I get what you are saying about boxing, by the way. I guess anyone who would deserve a lower score usually gets knocked out (or the fight is stopped by the ref), so it never comes to a decision, but I am surprised there isn't more scoring variation.
