Secrecy, privacy, security, intrusion

Hmmm. 148 pages. Can you give us a summary? (assuming that you have read it)


Briefly, there is too much concentration of power within a few powerful organisations, and chief among those named were Google and Facebook. Competition regulators (CRs) are somewhat concerned with how to deal with the issue. CSOs (civil society organisations, eg CHOICE) are in large part very concerned by the largely uncontrolled use of personal data, and many do not see that responses have been adequate or that they are able to influence much change (though CRs think they listen well to CSOs).

"A 2019 study by PI revealed how popular mental health websites in France, Germany and the UK share users’ data with advertisers, data brokers and large tech companies, including Google, while some ‘depression tests’ on these websites leak answers and test results to third parties. This research also shows the dominance of Google in this tracking ecosystem. On the webpages PI analysed Google was the most prevalent third-party tracker and Google’s advertising services DoubleClick and AdSense were used by most of these webpages. 70.39% of the webpages used DoubleClick. Other Google products such as Google Analytics, Google Tag Manager and Google Fonts are also widely used. 87.8% of webpages in France had a Google tracker, 84.09% in Germany and 92.16% in the UK.

In December 2018, PI revealed how Facebook routinely tracks users, non-users and logged-out users outside its platform through Facebook Business Tools. PI’s investigation found that at least 61% of the Android apps tested automatically transfer data to Facebook the moment a user opens the app. This was found to have occurred regardless of whether people have a Facebook account or not, or whether they are logged into Facebook or not. Furthermore, PI’s investigation found that some apps routinely send Facebook data that is incredibly detailed and sometimes sensitive. Again, this concerns personal data of people who are either logged out of Facebook or who do not have a Facebook account"

CSOs believe there is some hope in pointing out issues to competition regulators (eg ACCC), but they have little to no power to effect changes.

" While a substantial proportion of CSOs were disappointed by the results of their engagement with regulators, the majority of those that had interacted with them found that their interactions were somewhat impactful on regulators’ decisions"

“CSO respondents appeared less optimistic about their future interactions with competition regulators. In particular, a few CSOs questioned how receptive regulators were to arguments about impact on human rights or social impact beyond market impact.”

"This was a recurring concern, whereby CSOs are stuck between a rock and a hard place: on the one hand, they often lack the legal standing to make formal interventions or complaints to regulators, or to challenge regulators’ decisions before courts; and on the other, even if they did, they would lack the resources to be able to defend their cases - as the costs consequences can be dire "

"On the other hand, some CSOs were optimistic about their ability to build competition regulators’ awareness and capacity, for example through providing policy recommendations, involving regulators in studies, and inviting them to workshops or presentations"

Monetisation of personal data increases the risk that collectors of this data will seek even more data and more analyses of it, while a user of a service (whose personal data has been collected) will find it harder to move away from this data collection. The lack of transparency of the data collectors impacts the ability of any investigation to draw robust conclusions and provide supported evidence of any problems in the systems.

Some CRs have not yet grasped the effects that this personal data collection (and the associated corporate data exploitation) has on competition, targeted advertising, and interference in a person’s life.

Recommendations from the results of the survey

"While each context is different, the following recommendations seem to be supported by our findings:

• Competition regulators should develop strategies to deal with digital markets and the role of personal data, in consultation with CSOs;

• Competition regulators should proactively engage with CSOs and invite them to provide their views and expertise as part of policy consultations, market studies and merger reviews or investigations;

• Competition regulators should develop in-house expertise, including technical and legal expertise, to assess privacy parameters in competition investigations and work closely with their data protection counterparts;

• Competition regulators should support legislative reforms aimed at addressing the privacy challenges in digital markets and support initiatives that seek to ensure CSOs have legal standing and are able to bring or intervene in proceedings before them;

• CSOs should collaborate and share expertise to support their capacity to engage with competition regulators;

• CSOs should raise awareness of how competition can and should address data concentration concerns and seek opportunities for advocacy, including legislative reform and the granting of legal standing before competition authorities;

• Funders should further invest in the capacity of CSOs to work at the intersection of data protection and competition."


That’s all good as far as it goes. I don’t really see this as a competition issue though. It would still be a problem if there were 100 Googles each 1/100th of the size of the existing Google. (It could in some respects be a worse problem for the end user with 100 Googles.)

The sheer size of Google and Facebook may be a problem for a government attempting to regulate something bigger than the government is, but I don’t exactly see government attempting to regulate those companies anyway as far as privacy goes.

Also, government is part of the problem rather than part of the solution. It benefits from all this data collection and it could be hard to wean government off that.


Given the way personal data is used, couldn’t that data be considered a commodity? The way it is traded, it is a commodity in my opinion, and I’m sure in the eyes of many others.


That’s certainly another way to look at it. But then the market would be adjusting what price is paid for your data, still without giving you the option to say “no”.

Apple security flaw could allow hackers to control people’s iPhones. Experts say users should update their software now - ABC News

If you have an iPhone at or later than a 6S then there is an update available. You should apply the update ASAP as apparently this is “system-busting”.

Who knows what the situation is if you have an iPhone earlier than that!

If you are lucky, this is a vulnerability that was introduced by Apple after the iPhone 6, so you are safe (from this vulnerability).

If you are unlucky, then Apple has just hung you out to dry.

TikTok’s in-app browser can monitor your keystrokes, including passwords and credit cards, researcher says - ABC News

I try to avoid all apps but this one is apparently the worst (known).

Social media apps are worse than using the same social media site via a web browser.


The Victorian Government advised that it supplied de-identified contact tracing data to ACIC, to be used with Palantir software during the COVID outbreaks, to test whether the mystery sources of outbreaks could be identified.

If it had been able to trace the sources then, it seems to me, people could have been identified from the data.

An article about the sharing can be read at


Eek. I do not want my personal data anywhere near Peter Thiel and his data-swallowing giant.


Data supplied was used with the software…it just wasn’t a very successful outcome in regard to COVID tracing results.


META are being taken to the UK courts over their data collection and their use of the information garnered to send ads to users.

In the UK they have the protection of the GDPR laws, sadly missing here. If the suit is successful then this may have a huge impact on the operations of the Company in the UK and perhaps more widely in the EU as well.


Probably not for much longer; the UK wants to ‘break the shackles’ of the EU.


They may find the EU even more “hostile” when it comes to trading, movement, and even getting access to networks. This could particularly occur where UK laws are weaker than EU requirements and allow EU citizens’ data to escape GDPR protections when used, held or requested in the UK.


Would need careful checking of the details. If GDPR has been enshrined in UK law then the long-ago departure from the EU does not remove those protections unless the legislation specifically anticipated that. If GDPR has not been enshrined in UK law then it would depend on the transitional arrangements regarding Brexit.

(assuming “They” means “Facebook”) Yes. I would look first to the EU. If Facebook wins the “same” case in the EU then Facebook is unlikely to lose in the UK.

Just when you thought it was safe to enter the internet :blush:

A UK Govt department is going to scan every internet-connected device in the UK (so they say) for security vulnerabilities.

From the blog

“Basic technical details about our capability (including details on what we collect and store, as well as how to opt-out) are available on our scanning information page. Remember, if you do opt-out, we’ll be limited in how we can help you understand your cyber security exposure.”


Japan planned a one-pass version of this before hosting the Olympics.

Not sure of the end result, but there was a really good reason for doing it.

Almost every system that is connected to the Internet has some flaw or another, given the complexity of at least hundreds of thousands of lines of code and/or possible interactions. (I am not saying every system has a flaw, because there is some room for uncertainty.) When those flaws are discovered, one of several things happens:

  1. The discoverer is the developer, or notifies the developer who (hopefully) fixes the flaw and distributes a patch. We have bounty programs now to encourage this.
  2. The discoverer sells the flaw to bad people to weaponise (or weaponises it themselves).
  3. The discoverer just ignores or works around the flaw, leaving it unfixed but also unexploited.
  4. The discoverer publishes details of the flaw on social media so it is available to anyone who cares.

One of these has a potentially positive outcome, of getting the flaw patched. Okay, so the developer patches the flaw. Then we get to the next list of possibilities:

  1. The patch is automatically deployed to all affected devices/software instances. Apple can do this (mostly), because it keeps control of iOS. Of course, even with one of the world’s largest companies getting the patch to every phone takes a few days - while attackers have an opportunity to figure out what is being patched and attack anything not yet patched. There will also be countries (China) and entities (government and private) that refuse to auto-patch until they have studied what is being changed and figure out the change’s impact on them. (On second thoughts, game developers are the best at getting patches applied - if you don’t patch you don’t get to play.)
  2. A patch is made available and users are notified of it by the software. I think most web browsers now tell the user to reopen the browser so a patch can be applied. Again, you can choose not to apply such patches automatically.
  3. A patch is made available on the developer’s website and registered users are notified by email.
  4. A patch is made available on the developer’s website for users to discover.

Of course, every time a patch is deployed for commonly used software there will be people reverse-engineering it to find what it fixed and take advantage of any non-patched instances. This is one of the reasons why operating systems and browsers now pretty much force patches down our throats. And in options 2-4 above the user can choose not to deploy the patch.

And then there are tools like Shodan, telling anyone who cares that of 200,000 instances of device x it has found online 150,000 are running version 1.3 and not the patched version 1.4.
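To make that concrete, here is a minimal sketch of how a scan report like that could be reduced to a count of unpatched instances by comparing dotted version strings. The version numbers and the observed inventory are hypothetical, echoing the device-x example above; real scanners would be parsing much messier banner data.

```python
# Hypothetical sketch: given a list of observed version strings (as a
# Shodan-style scan might report per device), count how many instances
# are still running something older than the patched release.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '1.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def count_unpatched(observed: list[str], patched: str) -> int:
    """Count observed instances strictly older than the patched version."""
    threshold = parse_version(patched)
    return sum(1 for v in observed if parse_version(v) < threshold)

observed = ["1.3", "1.4", "1.3", "1.2", "1.4"]
print(count_unpatched(observed, "1.4"))  # → 3
```

Comparing tuples rather than raw strings matters: as strings, "1.10" would sort before "1.9", which is exactly the kind of mistake that misclassifies patched devices.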

So the UK government actively scanning the UK Internet space for problems makes sense. Others are already doing it; this is simply looking for known problems within a set user space and hopefully notifying people (I could imagine a registration system similar to, if not the same as, the UK’s current Early Warning Service) to update their software/hardware.
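As an illustration of what the benign end of such scanning involves, here is a minimal banner-grab sketch: connect to a host and port and read whatever the service announces about itself, which often includes the software name and version. The host and port in the comment are placeholders, and real scanning programmes layer consent, rate limiting and vulnerability matching on top of this.

```python
# Minimal service banner grab: many network services (SSH, SMTP, FTP)
# announce their software and version as soon as a client connects.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect and read up to 1024 bytes of whatever the service sends first."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        return sock.recv(1024).decode(errors="replace").strip()

# e.g. grab_banner("scanme.example", 22) might return something like
# "SSH-2.0-OpenSSH_8.9p1" - enough to check against known-vulnerable releases.
```

Matching that announced version against a list of known-vulnerable releases is, broadly, how tools like Shodan build their indexes; the only difference with a government scanner is who is notified afterwards.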

If it goes beyond that there is a problem, but otherwise I would like Australia to implement something similar - in an agency far removed from intelligence and law enforcement responsibilities. In fact, it may make more sense for an international entity to take on the responsibility for proactive scanning and advice to Internet users, though this would require international agreement followed by national laws. Maybe it could be added to the UNHRC’s remit?


This shows how open/lax we are about security, it would seem:
