A database for product reliability

One of the issues I have is that I have little idea how reliable certain brands actually are. Product reviews tend to over-represent bad and good experiences, with little of the average, and the CHOICE reviews don't tell me about long-term reliability. The data collection would work like this:

  1. A person buys a product and registers it.
  2. Each year the person gets a survey asking something like:
    1. Do you still have product X?
    2. Have you had to repair product X?
    3. Does it have any issues?

This would improve the quality of the lookup.
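The registration-and-annual-survey flow above could be sketched with a very simple data model. This is purely a hypothetical illustration (the names `Registration`, `SurveyResponse` and `due_for_survey` are invented for the example), not anything CHOICE actually runs:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Registration:
    """A product a member has registered after purchase."""
    member_id: str
    brand: str
    model: str
    purchase_date: date

@dataclass
class SurveyResponse:
    """One year's answers to the annual follow-up survey."""
    registration: Registration
    year: int
    still_owned: bool   # "Do you still have product X?"
    repaired: bool      # "Have you had to repair product X?"
    issues: str = ""    # "Does it have any issues?"

def due_for_survey(reg: Registration, today: date) -> bool:
    """A registration becomes due for its yearly survey roughly
    one year (365 days) after the purchase date."""
    return (today - reg.purchase_date).days >= 365
```

Each year the system would pick out registrations that are `due_for_survey` and send the three questions, accumulating one `SurveyResponse` per product per year.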


Hi Danielle,

Thanks for your interesting suggestion. You may be aware that every year we conduct a large brand reliability and satisfaction survey with members, which attracts around 10,000 responses. We ask members about major items like fridges and TVs every year, and every other year we ask them about other items of interest such as coffee machines, juicers or robot vacuums.
This data is collated and analysed to come up with brand reliability and brand satisfaction scores. For statistical accuracy, we need at least 50 responses for each brand, so we can’t include all brands in our results, but all major ones are typically covered. We normally publish reliability scores, if we have them, in the appropriate product review pages. There may be a way to present this information differently – in a more centralised manner, for instance – so thank you again for your feedback.
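As a rough sketch of how that 50-response cut-off might work when turning raw survey answers into per-brand reliability scores (the function, field names and the "share of products never needing repair" metric are my own invented illustration, not CHOICE's actual methodology):

```python
from collections import defaultdict

MIN_RESPONSES = 50  # brands below this threshold are excluded

def brand_reliability(responses):
    """responses: iterable of (brand, needed_repair) pairs.
    Returns {brand: share of products that never needed repair},
    only for brands with at least MIN_RESPONSES responses."""
    totals = defaultdict(int)
    repairs = defaultdict(int)
    for brand, needed_repair in responses:
        totals[brand] += 1
        if needed_repair:
            repairs[brand] += 1
    return {
        brand: 1 - repairs[brand] / n
        for brand, n in totals.items()
        if n >= MIN_RESPONSES   # drop statistically thin brands
    }
```

A brand with only a handful of responses simply never appears in the output, which mirrors why not every brand can be covered in the published results.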

Look out for our next reliability survey article in the August magazine.

Kim Gilmour
Content producer, household


I love the personal angle on this idea too. Maybe we could do something with warranties?


Choice does provide reliability information if you search through the product reviews online before you shop. But when one shops casually, it would be handy to access the brand/product reliability by mobile, on the spot. A brand/product search would be quicker.


Firstly, I didn’t know the survey Kim discussed above existed so I think the results could be made more visible.

Second, I think there could be value in prompting Choice subscription holders for reviews on long-term products like fridges years after purchase, to see how they've held up in terms of durability and needing repairs. Having this detail at the model level rather than the brand level would be valuable, especially for users who purchase second-hand items on platforms like Gumtree. Reviews are so often skewed towards people who have very negative (or very positive) experiences, so paying for reviews via reductions in the Choice subscription cost could reduce this bias.


Thanks Lisa. Yes, we’ve been conducting the survey for many years now. You can see a link to last year’s here:

We focus on the brand rather than the model because there are so many variations and we want to have enough final numbers to make the data as accurate as possible. It’s also very time-consuming for people to look for specific model numbers. It is a huge survey and we are already asking a lot from our members. But I do like the idea from @danielleighpark if enough people are willing to register their new purchases with us.


As part of the insights team at CHOICE (we help design surveys and other research) I find the ideas mentioned in this discussion really interesting!

The reliability survey @kim mentioned provides important information on the long-term reliability of different products (something we can’t test in our labs). We’ve been thinking about how to improve the current format, i.e. capturing product reliability in one (big) survey, and I’m excited about @danielleighpark’s idea for several reasons:

It’d break the survey down into more palatable chunks. It’d also save us from having to capture certain information (like when a product was purchased) every year.

It’d be an ongoing collection of reliability data across a potentially wider range of product categories.

It might even help with getting more model-specific information (something @lisagrace7 mentioned) which we don’t currently capture in the survey.

Lots of potential! We’ll think more about this, and also look forward to hearing more about your ideas & suggestions. Thank you!


Hi All, My response here is mostly directed to Choice Staff.
As you have stated, CHOICE has been doing the reliability surveys for years, & as someone who has taken a few over the last 10 years or so, I should like to point out that while you do the surveys frequently, the questions asked sometimes miss the point. After you ask whether we had warranty problems with the product, you don’t follow up with what they were in detail. I think you should be asking what problems we had with the product, because there are plenty of problems that don’t fall under the warranty umbrella yet are major problems. Design & location of switches, buttons, grips, cord entries & cord returns are but a few that I can think of.
A big problem I often see is many things are not designed to be “age friendly” with tiny buttons, universal symbols, very small printing on labels on equipment & instruction sheets.
What I’m trying to say is there are plenty of products that meet the requirement of being reliable by not requiring warranty claims during their working life, but are the biggest lemons going because of other basic usage flaws.
I feel CHOICE allows these products to slip through as good products because they meet warranty requirements.
My intention is not criticism, but to see a more accurate report as a result.
Regards Pegasus


Thanks for those important observations, Pegasus.

These days we not only ask members about reliability, but we ask about satisfaction, too. For example: steam mops are pretty reliable. On the whole they turn on and they work. But do they do what people expect them to? To some extent, no. They aren’t great on grout, for instance. So they get a lower satisfaction score.
We see this again and again with several other product categories.
In terms of the more accessible and “age friendly” features you speak of, that is where our independent product testing comes in. Our expert ease of use assessments make up a large proportion of most of our product tests. Read our reviews if you want to know about these aspects, and then look at reliability and satisfaction scores, to get the whole picture.

Kind regards,

Thank you for your response, Kim. I was not thinking of the fact that satisfaction is different for each person, dependent on their individual needs from that product, & as long as a good cross section of young & old, technical & not, male & female etc. are doing the surveys, then the results will take my points into account.
Regards Pegasus.

Thank you for this @Pegasus - great food for thought.

I like your idea about looking at issues or problems with products (e.g. dishwashers) holistically. Some might be “reliability” problems, i.e. something stops working after 6 months; others are general design/ease of use/etc. problems that exist from day one.

It’s tricky to package it up in one survey and keep it short (which is what we aim for…even though as Kim mentioned the reliability survey tends to end up BIG).

We’ve been tossing around the idea of making the reliability survey specific to one category only, in which case we could also delve deeper into problems generally. And I agree: if we then ensure we get a good cross section of people using a certain product, that should shine a broad light on what’s working and what isn’t with a product.

Thanks again for your thoughts on this - and please continue to add suggestions for surveys / questions - we’re always on the lookout!



Hi Christina, Why not do a survey on what products Choice members own in a wide range of areas, & then find similar or preferably the same products, & do a long-term survey using the Choice member results on reliability?
You could contact this group every 3 months with a well thought out tick & flick survey on these targeted products, & you would have a very accurate & “real life use” survey. I have yet to see a survey that can give you real-time info on paint condition or the UV condition of plastics on products, or on repetitive-use tasks after a set time period. Food for thought?
Regards Pegasus.


YES!! A group of people who regularly feed back on their real-life, real-time in-home products.

Ok, maybe there are a few too many hyphenated words in this :wink:

But the idea is fabulous. Definitely something to think about further!

Thank you @Pegasus


Slightly to the side of this topic, on the subject of reliability testing by CHOICE, I reckon there is a bit more that Choice could do. When I was in the manufacturing industry we tested our outgoing product for durability using accelerated methods, designed to emulate final use conditions and provide some correlation with actual performance. For example, the wear resistance of a surface coating can be assessed using machines to abrade the surface under controlled conditions of mechanical action and load. The tests only take a few minutes to complete but of course need to be repeated to be significant. Of course it can be harder to test the surface coatings of a manufactured product than a component or raw material.
Similar information can be obtained quite quickly for characteristics like flexibility, tensile strength, fastness to light and other environmental conditions, etc. Would it be meaningful if an appliance or tool were made to operate intermittently but continuously over a period of hours, days or weeks, depending on the time available?
The furniture industry conducts machine testing on some items to simulate the repeated load and unload of a chair for example, or the repeated opening and closing of a hinged item. These may take too long for Choice to employ however.

I’m not in the testing side of CHOICE but I’m often in and around the labs and we do have a number of rigs for just this sort of thing – often custom made by our lab staff. I’ll invite some of them into this conversation to see if they can shed some light on this aspect of our testing.

Hi Kim,
I think you might have missed the point of the excellent suggestion from Danielle.

Reliability isn’t about brand alone; it’s about the product, and should also factor in some element of support/recovery. For example, we all know Samsung makes reliable solid state drives, as they are good at hardware. That said, they have little idea how to make software (particularly in terms of maintenance and testing), so any product of theirs that combines complicated software and hardware tends to be unreliable, unusable or extremely limited in shelf life. By rating the brand only, what are you really scoring? We also know, with Samsung again as the example, that version 1 of a device is often built better than version 2, e.g. Samsung Note 1 vs Note 2 (which had a poorer screen). Quality often drops as they reduce costs and try to leverage the hype and success of the first product.

As consumers we need to know about version 2 and version 3, not just version 1, and not just about the overall company. How quickly failures are recovered from should also be factored in. For example, Samsung released a firmware update that ruined many of their high-end LED TVs (the plasma logic was accidentally introduced into the LED firmware), but it took them over 12 months to reach a point of recovery, which actually resulted in refunds via consumer affairs for some (e.g. me). How quickly they provide a solution to problems should factor into the “reliability” score, particularly given the dependency on software/firmware and regular updates that many new products now have.

The CHOICE reliability survey is useful, but of very limited use without proper context about the product.

The database idea is an excellent one, though a lot of work to maintain! I was tempted to set something like this up myself, but it would require a lot of effort to keep current. Setting up the data structure and website is the easy bit.
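For what it's worth, the data structure really is the easy part. Here is a minimal, entirely hypothetical SQLite sketch (table and column names are my own invention); note the model/version level of detail discussed above:

```python
import sqlite3

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (
    id            INTEGER PRIMARY KEY,
    brand         TEXT NOT NULL,
    model         TEXT NOT NULL,       -- model-level, not just brand-level
    version       TEXT                 -- e.g. 'Note 1' vs 'Note 2'
);
CREATE TABLE registration (
    id            INTEGER PRIMARY KEY,
    product_id    INTEGER NOT NULL REFERENCES product(id),
    purchase_date TEXT NOT NULL        -- ISO 8601 date string
);
CREATE TABLE survey_response (
    id              INTEGER PRIMARY KEY,
    registration_id INTEGER NOT NULL REFERENCES registration(id),
    year            INTEGER NOT NULL,
    still_owned     INTEGER NOT NULL,  -- 0/1
    repaired        INTEGER NOT NULL,  -- 0/1
    issues          TEXT
);
""")
```

The hard part, as noted, is keeping the registrations and yearly responses flowing in, not storing them.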



Thanks Paul.

Yes, the database is a good option to pursue. We’ll look into how we can get something off the ground.




We do have many rigs, as @viveka says, that we use to test durability. We are limited in what we can test due to time constraints but here are some examples:

  • Our suitcase test involves drop-testing the suitcase on to a hard surface hundreds of times, from a height of 90cm. We also built a custom rain rig to test water resistance. We have scratch and puncture tests available too.
  • Our lightbulb test is ongoing to test for performance and longevity.
  • Our strollers are tested to the Australian standard, which involves putting them on the rolling rig for 64 hours, and testing for stability.
  • Electric blanket cords undergo a flexure test where we simulate the cord flexing thousands of times while the cord is pulled by a weight.
  • Our solar panel test is taking a year, in conjunction with CSIRO.
  • We have labs that are accredited to test the durability of toys.
  • We are currently testing kitchen benches, which will also involve some destructive testing.

There is a lot more we could do - but some problems only manifest themselves after many, many months and we usually don’t have time to wait a year, especially in fast-moving markets. That’s why the database idea is good as it will help narrow down the reliability by model. Our kettle test identified a good performer, yet many people have subsequently left a poor review of it, as the kettle tended to underperform after a period of about a year. To help address this, one of our team has taken it home and will see if they run into any issues.



Hello there,

It looks like I’ve been teaching my grandmother to suck eggs. Sorry to doubt you.

However, I’m not aware of extensive durability data in your reports. I recall some appliances failing, e.g. a cord anchorage test, but I have never been aware of durability data in your reports. Clearly I have not been looking properly.

Thanks for the reply and the details.

Since I joined CHOICE I’ve been amazed to see the depth and rigour that our testing labs go to. I mean, I knew that CHOICE independently tests products but it’s so much deeper than I expected, down to the special regulation dirt imported from Germany used to test vacuum cleaners.

I think there’s a lot more we can do in the way we present our results to communicate both what we’ve learnt through testing and how we know it. And I know that we have projects in the works to do just that. So yes, we hear you and we won’t be resting on our laurels.