We've been writing and editing tech product reviews for going on three decades, so we can claim some real expertise in the process. But Fixya's taking an interesting approach we haven't seen before: collecting thousands of troubleshooting reports and drawing conclusions about the products from them.
Fixya aggregated about 6,000 user reports about five smart watches: the Pebble, Samsung's original Galaxy Gear, the Sony SW2, the Martian Passport, and the I'm Watch. Like the rest of the site, the reports focus on problems users have detailed. Speakers that don't work right, Bluetooth that fails, too-short battery life, a feature that people were expecting but isn't there — Fixya collects the complaints, breaks them down by category, and reports them out. It's interesting.
There's some real merit to this. Reviewers don't typically have much time with products — sometimes as little as a couple of hours, or less — and a review is almost by definition one writer's (hopefully experienced) take on one product. A lot of eyes on a product over an extended period is a valuable addition.
But Fixya’s methodology is focused entirely on the negative: what’s not working right, or up to expectations. There’s value in that, especially across a wide range of respondents. But it’s one thing to criticize the Pebble for control buttons that don’t work well (which is a product execution problem) and another to ding it for not featuring voice control (which may or may not be a missing feature). When all you’re getting is complaints, how can you say that a product is good? How do you rank them against each other?
The other problem: collecting enough reports to be meaningful takes so long that the resulting writeups can cover products that aren't available anymore, like the original Samsung Gear.
So Fixya’s reports are good data points about these products. But they’re not really product reviews, because they’re not the whole story.