Subscription fees for unwanted services

Many thanks for the feedback @NubglummerySnr and @PhilT. I’ve moved this from the printer ink thread as I was hoping to expand on the discussion, but feel free to disregard if you wish.

I’d like to explore what CHOICE can do to improve our perceived value. Even if a subscription is deemed unworthy for an individual at a particular time, I’d like to address some of the concerns raised here to find out whether it’s a matter of perception, or whether there’s something we can do to change things for the better. Much of the work at CHOICE has been improved and refined over the years thanks to the honest criticism we receive from our supporters, so we mean it when we say thanks for speaking up :slight_smile:

CHOICE is largely funded by members via our subscription fees. So, while a particular product test may not be perceived as valuable, the reality is that subscriptions fund all of our activity, from our campaigning right through to this forum. We have enormous admiration for our subscribers because, from inside the organisation, we can see that their support often results in better conditions for all Australian consumers in many different areas, from exposing dodgy business behaviour through the media to meeting with politicians to improve regulation. We are grateful for that opportunity, and we come in each day with it in mind, fighting as hard as we can to deliver unbiased information and fairer conditions for consumers.

The challenge is that it’s often hard for us to connect those dots, as an outcome may take years. There is not always a direct correlation between a subscription for a product review and a big consumer win, but the subscription most certainly supports our work in a very important way. So, the first question I have is: how can we improve this perception and better encourage support for this broad-level work? Is it a case of better communication, or better options to fundraise? This is also an action we’re looking to implement via this forum.

With that said, we don’t expect consumers to subscribe to our content simply to support our altruistic goals. We intend those product tests to provide information that will save consumers both time and real dollars. In the case of the printer test (and other articles), it’s true that we prepare our message for a non-technical audience. We want to make it easy for people to identify ‘that’s the best one’ without needing to understand the technical analysis behind the recommendation.

Staying with the printer article, however: we’ve tested 119 printers in a side-by-side comparison. We published 56 touch points across the models tested, and there are further considerations that we condense into good points and bad points. For example, whether a model can print A3 is included in this section, as is duplex printing and so on. The concern here is that the scale and depth of the analysis have clearly been misunderstood if it is being weighed up alongside the majority of free reviews online, even before we consider any advertiser relationships. Of course, there are also crowd-sourced reviews, and these certainly have value, but in product-testing terms they do not necessarily meet the same standard that we aim to achieve.

I’m therefore wondering whether the more in-depth comparison tables are well known or understood. Perhaps this is something we can look at in terms of our site design, or maybe we can highlight it more clearly in the text?

These points are intended to be explanatory, not to rebuff the very valid concerns raised. Perhaps I’ve also missed the real point of frustration, so all comments are welcome. As mentioned, we really appreciate the feedback, and if there are ideas on ways CHOICE can improve in value terms, we’d love to hear them!