Firstly, I didn’t know the survey Kim discussed above existed, so I think the results could be made more visible.
Second, I think there could be value in prompting Choice subscription holders for reviews on long-term products like fridges years after purchase, to see how they’ve held up in terms of durability and needing repairs. Having this detail at the model-number rather than the brand level would be valuable, especially for users who purchase second-hand items on platforms like Gumtree. Reviews are so often skewed towards people who have had very negative (or very positive) experiences, so paying for reviews via reductions in the Choice subscription cost could reduce this bias.
Thanks Lisa. Yes, we’ve been conducting the survey for many years now. You can see a link to last year’s here:
We focus on the brand rather than the model because there are so many variations and we want to have enough final numbers to make the data as accurate as possible. It’s also very time-consuming for people to look for specific model numbers. It is a huge survey and we are already asking a lot from our members. But I do like the idea from @danielleighpark if enough people are willing to register their new purchases with us.
As part of the insights team at CHOICE (we help design surveys and other research) I find the ideas mentioned in this discussion really interesting!
The reliability survey @kim mentioned provides important information on the long-term reliability of different products (something we can’t test in our labs). We’ve been thinking about how to improve the current format, i.e. capturing product reliability in one (big) survey, and I’m excited about @danielleighpark’s idea for several reasons:
It’d break the survey down into more palatable chunks. It’d also avoid having to capture certain information (like when a product was purchased) each year.
It’d be an ongoing collection of reliability data across a potentially wider range of product categories.
It might even help with getting more model-specific information - something @lisagrace7 mentioned - which we currently don’t capture in the survey.
Lots of potential! We’ll think more about this, and also look forward to hearing more about your ideas & suggestions. Thank you!
Hi All, My response here is mostly directed to Choice Staff.
As you have stated, CHOICE has been doing the reliability surveys for years, & as someone who has taken a few over the last 10 years or so, I’d like to point out that while you do the surveys frequently, the questions asked sometimes miss the point. After you ask whether we had warranty problems with the product, you don’t follow on with what they were in detail. I think you should be asking what problems people had with the product, because there are plenty of problems that don’t fall under the warranty umbrella but are still major problems. The design & location of switches, buttons, grips, cord entries and cord returns are but a few that I can think of.
A big problem I often see is that many things are not designed to be “age friendly”, with tiny buttons, universal symbols, and very small printing on equipment labels & instruction sheets.
What I’m trying to say is there are plenty of products that meet the requirement of being reliable by not needing warranty repairs during their working life, but are the biggest lemons going because of other basic usage flaws.
I feel CHOICE allows these products to slip through as good products because they meet warranty requirements.
My intention is not criticism, but to see a more accurate report as a result.
Regards Pegasus
These days we not only ask members about reliability, but we ask about satisfaction, too. For example: steam mops are pretty reliable. On the whole they turn on and they work. But do they do what people expect them to? To some extent, no. They aren’t great on grout, for instance. So they get a lower satisfaction score.
We see this again and again with several other product categories.
In terms of the more accessible and “age friendly” features you speak of, that is where our independent product testing comes in. Our expert ease of use assessments make up a large proportion of most of our product tests. Read our reviews if you want to know about these aspects, and then look at reliability and satisfaction scores, to get the whole picture.
Thank you for your response, Kim. I was not thinking of the fact that satisfaction is different for each person, depending on their individual needs from that product, & as long as a good cross section of young & old, technical & not, male & female etc. are doing the surveys, then the results will take my points into account.
Regards Pegasus.
Thank you for this @Pegasus - great food for thought.
I like your idea about looking at issues or problems with products (e.g. dishwashers) holistically. Some might be “reliability” problems, i.e. something stops working after 6 months; others are general design/ease of use/etc. problems that exist from day one.
It’s tricky to package it up in one survey and keep it short (which is what we aim for…even though as Kim mentioned the reliability survey tends to end up BIG).
We’ve been tossing around the idea of making the reliability survey specific to one category only - in which case we could also delve deeper into problems generally. And I agree that if we then ensure we get a good cross section of people using a certain product, that should shine a broad light on what’s working and what isn’t with a product.
Thanks again for your thoughts on this - and please continue to add suggestions for surveys / questions - we’re always on the lookout!
Hi Christina, Why not do a survey on what products Choice members own in a wide range of areas, & then find similar or preferably the same products & do a long-term survey using the Choice member results on reliability?
You could contact this group every 3 months with a well-thought-out tick & flick survey on these targeted products & you would have a very accurate & “real life use” survey. I have yet to see a survey that can give you real-time info on paint condition, or UV degradation of plastics on products, or repetitive-use tasks after a set time period. Food for thought?
Regards Pegasus.
Slightly to the side of this topic, on the subject of reliability testing by Choice, I reckon there is a bit more that Choice could do. When I was in the manufacturing industry we tested our outgoing product for durability using accelerated methods, designed to emulate final use conditions and provide some correlation to actual performance. For example, the wear resistance of a surface coating can be assessed using machines to abrade the surface under controlled conditions of mechanical action and load. The tests only take a few minutes to complete but of course need to be repeated to be significant. Of course it can be harder to test the surface coatings of a manufactured product than a component or raw material.
Similar information can be obtained quite quickly for characteristics like flexibility, tensile strength, fastness to light and other environmental conditions, etc. Would it be meaningful if an appliance or tool were made to operate intermittently but continuously over a period of hours, days or weeks, depending on the time available?
The furniture industry conducts machine testing on some items to simulate the repeated load and unload of a chair for example, or the repeated opening and closing of a hinged item. These may take too long for Choice to employ however.
I’m not in the testing side of CHOICE but I’m often in and around the labs and we do have a number of rigs for just this sort of thing – often custom made by our lab staff. I’ll invite some of them into this conversation to see if they can shed some light on this aspect of our testing.
Hi Kim,
I think you might have missed the point of the excellent suggestion from Danielle.
Reliability isn’t about brand alone, it’s about the “product”, and should also factor in some element of support/recovery. For example, we all know Samsung makes reliable solid state drives, as they are good at hardware. That said, they have no idea how to make software (particularly in terms of maintenance and testing), so any product they make that combines complicated software and hardware tends to be unreliable, unusable or extremely limited in shelf life. By rating the brand only, what are you really scoring? We also know, with Samsung as an example, that version 1 of a device is often built better than version 2, e.g. the Samsung Note 1 vs the Note 2 (which had a poorer screen). Quality often drops as they reduce costs and try to leverage the hype and success of the first product.
As consumers we need to know about version 2 and version 3, not just version 1, and not just about the overall company. Also factor in how quickly failure is recovered from: e.g. Samsung released a firmware update that ruined many of their high-end LED TVs (the plasma logic was accidentally introduced into the LED firmware) but took over 12 months to get to a point of recovery, which actually resulted in refunds via consumer affairs for some (e.g. me). How quickly they provide a solution to problems should factor into the “reliability” score, particularly given the dependency on software/firmware and regular updates that many new products now have.
The Choice reliability survey is useful, but of limited value without the proper context of the product.
The database is an excellent idea, though a lot of work to maintain. I was tempted to set something like this up myself, but it would require a lot of effort to keep current. Setting up the data structure and website is the easy bit.
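To show why the data structure really is the easy bit, here is a minimal sketch of how such a registry could aggregate follow-up responses into model-level reliability figures. Everything here (class names, field names, the model number) is an illustrative assumption, not anything CHOICE has built:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical follow-up record: a member who registered a purchase is
# prompted some years later. All names and fields are illustrative only.
@dataclass
class FollowUp:
    member_id: str
    model: str            # model-level detail, as suggested in the thread
    still_working: bool
    needed_repair: bool

def reliability_by_model(followups):
    """Aggregate follow-up responses into per-model failure/repair rates."""
    counts = defaultdict(lambda: {"n": 0, "failed": 0, "repaired": 0})
    for f in followups:
        c = counts[f.model]
        c["n"] += 1
        c["failed"] += int(not f.still_working)
        c["repaired"] += int(f.needed_repair)
    return {
        model: {
            "responses": c["n"],
            "failure_rate": c["failed"] / c["n"],
            "repair_rate": c["repaired"] / c["n"],
        }
        for model, c in counts.items()
    }

# Example: three follow-ups for one made-up fridge model
reports = [
    FollowUp("m1", "FR-500", still_working=True, needed_repair=False),
    FollowUp("m2", "FR-500", still_working=True, needed_repair=True),
    FollowUp("m3", "FR-500", still_working=False, needed_repair=True),
]
print(reliability_by_model(reports))
```

A real system would also need a minimum response count per model before publishing a figure, which is exactly the small-sample problem Christina raised about brand vs model level data.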
We do have many rigs, as @viveka says, that we use to test durability. We are limited in what we can test due to time constraints but here are some examples:
Our suitcase test involves drop-testing the suitcase on to a hard surface hundreds of times, from a height of 90cm. We also built a custom rain rig to test water resistance. We have scratch and puncture tests available too.
Our lightbulb test is ongoing to test for performance and longevity.
Our strollers are tested to the Australian standard, which involves putting them on the rolling rig for 64 hours, and testing for stability.
Electric blanket cords undergo a flexure test where we simulate the cord flexing thousands of times while the cord is pulled by a weight.
Our solar panel test is taking a year, in conjunction with CSIRO.
We have labs that are accredited to test the durability of toys.
We are currently testing kitchen benches, which will also involve some destructive testing.
There is a lot more we could do - but some problems only manifest themselves after many, many months and we usually don’t have time to wait a year, especially in fast-moving markets. That’s why the database idea is good as it will help narrow down the reliability by model. Our kettle test identified a good performer, yet many people have subsequently left a poor review of it, as the kettle tended to underperform after a period of about a year. To help address this, one of our team has taken it home and will see if they run into any issues.
It looks like I’ve been teaching my grandmother to suck eggs. Sorry to doubt you.
However, I’m not aware of extensive durability data in your reports. I recall some appliances failing, e.g. a cord anchorage test, but I have never noticed durability data in your reports. Clearly I have not been looking properly.
Since I joined CHOICE I’ve been amazed to see the depth and rigour that our testing labs go to. I mean, I knew that CHOICE independently tests products but it’s so much deeper than I expected, down to the special regulation dirt imported from Germany used to test vacuum cleaners.
I think there’s a lot more we can do in the way we present our results to communicate both what we’ve learnt through testing and how we know it. And I know that we have projects in the works to do just that. So yes, we hear you and we won’t be resting on our laurels.
I think any question posed under “warranty” as opposed to “failure” misses the mark. Not all failures are warranty issues, but most warranty issues are failures - time-dependent and subject to the ACL.
Another aspect, but how can it be quantified? An increasing number of manufacturers only stock spares for the warranty period after a product reaches end-of-life. Since there are still some sales for a time after that, if a failure cannot be fixed because of insufficient spares, they have the option of replacing with a “like product.” So having no part available is not a worry for them.
For us, e.g. our Asko washer, designed for 20 years’ service as the advertising goes, looked and worked as new but had the control board fail at 12 years. As with anything electronic in the modern world, the module was designed to be replaced, not repaired. But no part was available - out of production and out of stock. Asko suggested we try 3rd-party parts suppliers and used-machine vendors to scavenge the part.
We did not replace it with another “quality” Asko! Why pay top dollar for a product that might last 20 years, or only 12, when one can buy a product that is rated very highly for what it does, that might last 12 years or maybe longer if lucky, or perhaps just 6, but for half the price?
I had the same problem with a top of the line Bosch dishwasher, only mine lasted 5 (!) years - just out of warranty - before the power board failed. Even taking the dishwasher in for repairs, it was going to cost me almost as much as a new LG, the brand suggested for reliability by every repair person I spoke with. My point is, until the power board failed, I would have rated this machine with high marks in every category. I agree that reliability has to cover many years in order to be useful.
In the last 12 months I completed a Choice survey. It asked all the relevant questions on all appliances we currently have in our home. I wasn’t sure how these survey responses were used until today, but I think a section on member reviews and product reliability could be consolidated on the Choice website as a separate menu item. Maybe in a spreadsheet type format?
Our Council has a “bulky goods” collection twice a year. All sorts of stuff gets put onto the nature strip for collection. One item I have noticed appearing with great regularity is barbecues, in a great variety of brands and models. I was wondering if Choice could do some ‘reverse’ product testing of tossed barbecues to see if there are obvious common faults. Inspecting tossed barbecues is one way, but another is a quick consumer survey, wherever there is a barbecue on the nature strip, asking why it was thrown away. For example, rust seems very common; gas controllers not working is another. If Choice could find out what causes barbecues to be thrown out, and then take that into account when testing the next new lot, it may provide some interesting ideas on what to check.