Human rights and technology - Submission to the HRC

We’ve responded to the Australian Human Rights Commission Issues Paper on ‘Human Rights and Technology’, which outlines how technology can be used by businesses to unfairly discriminate against and harm people.

Now, we’re calling on the AHRC to investigate existing and emerging issues across insurance, finance, renting, energy and data, and outline the need for new, improved legislation. We’d love to hear your thoughts on our submission (PDF), including on the following key areas:

  • Access to technology
  • Issues with data and discrimination
  • Finance and General Insurance
  • Health and genetic data
  • Renting technologies
  • Holiday rentals
  • Consumer Data Right

We’d love to hear your thoughts on any of the topic areas above, on our submission (do you agree or disagree?), and on any other ideas you have for addressing these important issues moving forward.

3 Likes

Maybe some of our @Defender-Black or @Digital-Rights group can offer some opinions?

2 Likes

Hi @BrendanMays

I have printed it out but haven’t had a chance to go through it. :roll_eyes: I will respond ASAP. :stopwatch:

3 Likes

Thanks @meltam. I’m sure @LindaPrzhedetsky will really appreciate your input :+1:

3 Likes

To be realistic, I would need to spend far more time examining both the Choice submission and the original discussion paper, to make anywhere near an informed and intelligent comment. All I can offer is a small perspective based on the general points identified above.

My concern with technology is simply the rate of change and the ability of various sectors to absorb, adapt and adopt new technologies. In an ever changing environment it is essential that we maintain parallel pathways to a given end, but increasingly we are seeing original pathways abandoned in favour of new computerised ones.
The flexibility inherent in dealing directly with a human being is being abandoned in favour of the more efficient yet clinical approach of the computer or web interface. Not everyone can make that transition, and therefore it is essential that those individuals can continue to access whatever they need without an expectation that they have that capacity to adapt.

I include here the elderly, those who lack capacity for whatever reason and those who may struggle with language. It often strikes me that various processes I might struggle with personally must be incomprehensible to those of lesser ability.

I am also aware of various instances where online processes are so formulaic that they allow insufficient latitude for situations which fall outside the prescribed frame of reference. For example, when applying for travel insurance, an automated system may ask whether your medication has changed. However, it assumes that any change is an increase and makes no allowance for a reduced level of medication. “Change” in this instance is deemed to be negative by default, and as the individual human contact has been removed, there is no opportunity to discuss, explain or even document circumstances which may be relevant to the eventual computer-based decision.
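The formulaic logic described above can be sketched in a few lines. This is a hypothetical illustration only, not any insurer’s actual system; the function names and outcomes are invented:

```python
# Hypothetical sketch of a rigid online form: a yes/no "has your
# medication changed?" question treats every change as added risk.

def rigid_assessment(medication_changed: bool) -> str:
    # Any change at all is deemed negative by default.
    return "refer to underwriting" if medication_changed else "standard premium"

def human_assessment(change: str) -> str:
    # A human agent can distinguish the direction of the change:
    # a reduced dose usually signals improving health, not added risk.
    return "standard premium" if change in ("none", "reduced") else "refer to underwriting"

print(rigid_assessment(True))        # the rigid form flags even a reduced dose
print(human_assessment("reduced"))   # a person would wave it through
```

The yes/no question collapses the direction of the change into a single flag, which is exactly the information a human conversation would have preserved.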

There are many situations where online processes lack that flexibility to cater to non-routine circumstances, and this is both discriminatory and a retrograde step in the interaction between individuals and their world.

So my basic point here is that technology is a wonderful thing, but in the process of streamlining our relationships with the corporate world, we should ensure that pathways remain open for those whose capacity for change is more limited. We also need to ensure that nothing is lost in the transition.

9 Likes

For me one of the main ones is how consent changes when the service does.

Most major services make you re-accept the terms and conditions when they change, but there is still no legal requirement to seek consent when the way a company or agency handles data changes (as far as I know). This should be a bare minimum to ensure people don’t consent to something, and then have that presumed as consent for something more invasive.
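The bare-minimum rule suggested here can be sketched as a simple policy-version check. This is a hypothetical scheme, not any existing system; the names are invented:

```python
# Minimal sketch: consent is recorded against a specific version of the
# data-handling policy, and any change to that policy invalidates it.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user: str
    policy_version: int  # the version the user actually agreed to

def may_use_data(record: ConsentRecord, current_policy_version: int) -> bool:
    # Consent given for v1 must not be presumed to cover a more
    # invasive v2; the user has to agree again.
    return record.policy_version == current_policy_version

consent = ConsentRecord(user="alice", policy_version=1)
print(may_use_data(consent, 1))  # True: handling unchanged
print(may_use_data(consent, 2))  # False: handling changed, re-consent needed
```

The design choice is that consent is tied to *what was agreed to*, not to the person’s ongoing use of the service.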

This extends to the government too. Take My Health Record. If the government decided to open up MHR access to insurers, people should have to agree to the new system before their data can be handed over. They should also have an easy option to delete their information if they don’t want to agree.

6 Likes

Linda,

23 pages in the Choice submission

68? pages in the Issues discussion paper

10 discussion topic questions all with sub sections in the paper and a 2nd Oct 18 due date for submissions.

It’s impossible to respond to the whole. No doubt Choice has narrowed its response for that reason too?

Choice has done a good job of proposing connections between consumer outcomes and technology change. Expressing these in terms of rights is always difficult given our constitution and legal system.

The themes of fairness, equity, no disadvantage, equal access are repeated by Choice in the submission. Perhaps Choice might like to distill what these fundamental outcomes should be and emphasise the need for all consumer outcomes to be measured or trialled against these outcomes (principles, rights).

It is perhaps easier to agree that a system or machine can do no harm, than to define how this might work in reality. In extreme views of the future:

An automated vehicle might assess that it is impossible to detect and prevent all accidents. However, it may decide that if it never exceeds 10 kph it can achieve the goal?

A credit assessment software system guided by AI might decide that, for the better health of an applicant who has no income, the loan should be interest-free?

An MRI service may decide to prioritise a higher-risk patient over a lower-risk one, or skip a probably terminal patient in favour of those with better prospects?

Should a machine or system, once developed, be allowed to displace employment, or should the new technology come with a requirement to fully offset all impacts?

There is a risk here that we may legislate to predetermine outcomes in these and many similar scenarios, excusing poor decisions and indemnifying the developers and owners. Choice has in principle asked similar questions in the submission with specific examples or areas of concern.

The risk is perhaps not the Technology but what we ask of it and whether the designers and owners are able to be held directly accountable.

Drawing on how we require a rigorous process of environmental review and approval for projects, it may also be wise to ask for a similar process of independent review and approval to consider each new technology product directly and specifically in a more public way, be it systems-based or physical change?

As well as a no-harm test, a shared benefit that improves the future of all might assist in deciding if a new technology should be accepted?

5 Likes

I agree. Another example could be doing your tax online; the inflexibility of that process makes it probable that I will continue to lodge on paper for as long as that is an option.

However all such instances are fixable. There is no intrinsic reason why an online process cannot be extended to include scenarios that are currently uncatered for - provided that there is some mechanism to bring the problem to the attention of the operator of the online process - and provided that the operator is then prepared to do something about it.

It is doubtful that legislation should be created in order to compel an operator to do something about it.

4 Likes

I think there is a fundamental problem here. The Australian government lacks the moral authority to fix this because it is part of the problem.

3 Likes

From Choice’s submission:

In Australia, those over the age of 65 are the heaviest users of our healthcare system. Many
points of access to the Australian healthcare system are now digital, from booking an
appointment online, to claiming a Medicare rebate, to accessing My Health Record. People over
the age of 65 are Australia’s least digitally included age group, and they are being left further
and further behind. That means older Australians, who are the heaviest users of healthcare
and have the most to gain from accessing online tools that could simplify their journey through
the healthcare system, are the the [sic] least likely to reap these benefits.

(are you interested in any other proofreading issues? or has the submission already been made?)

Without doubting the suggestion being implied, I think that that ship has well and truly sailed. The cost and efficiency advantages of digital / online access are so compelling that it seems very unlikely to me that we will ever go back - or that we can in any meaningful way hold back the tide of digital services replacing non-digital services.

I think therefore we need to focus on how we can make that digital / online process more accessible to the prospective customers.

I would add by the way a mention of the aged care system. It is similarly opaque, provides much of its information online, and yet its customer base is uniformly old.

Perhaps the answer here is two-fold.

  • An AI Digital Assistant that you can trust (i.e. throw away all the existing ones, which introduce as many Human Rights / Technology issues as they solve) and which is accessible.
  • Eventually the customer base will be digital natives i.e. this problem will largely solve itself with the passage of time.

PS No person ever should have to claim a Medicare rebate. That is just annoying politics in operation. However that is a point solution to a specific issue, not the general issue.

PPS You already know what I think about My Health Record. :slight_smile:

4 Likes

From Choice’s submission:

CHOICE is concerned about the assumption the insurer has made: while there may be a correlation between education levels and riskier driving habits, someone’s level of education does not cause them to be a riskier driver.

I think this may be off the mark. The insurer need make no assertion or assumption about either “causation” or “correlation”. (Both of these may be interesting research questions but that could be outside the scope of an insurance company’s level of interest in the problem.)

“causation” is clearly hugely problematic but if it is claimed that an insurance company has made such an assertion then I think a citation is needed.

“correlation” is trickier. Perhaps the insurance company can show a statistically significant correlation. Within the limits of being able to classify customers accurately and having accurate data in order to do so, it would be difficult to argue with the mathematics. Based on some of the ideas presented later in the submission, I think that could leave you in an uncomfortable position.

However even “correlation” is unnecessary. The bottom line is that insurance companies calculate premiums by looking at historical claims data. They have a right to make a profit and if the claims data says that they need to increase premiums in order to do so then so be it.
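The claims-data-only pricing described here can be sketched as follows. The figures and group names are invented purely for illustration; no real insurer’s data or method is implied:

```python
# Hedged sketch of risk-based pricing: the premium for a group is its
# historical average claim cost plus a profit loading. No causal or even
# correlational claim is made - the rate simply follows the claims data.

claims = {
    # group: (number of policies, total claims paid, in invented dollars)
    "no_degree": (1000, 450_000),
    "degree":    (1000, 400_000),
}

LOADING = 1.2  # 20% margin for expenses and profit (assumed figure)

def premium(group: str) -> float:
    policies, total_paid = claims[group]
    expected_cost = total_paid / policies  # average historical claim cost
    return expected_cost * LOADING

print(premium("no_degree"))  # 540.0
print(premium("degree"))     # 480.0
```

The fairness question is then not about the arithmetic, which is unobjectionable, but about which attributes an insurer should be permitted to partition the data by in the first place.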

It is arguable that insurance is not a human right but let’s put that to one side and look at potential policy responses.

Community rating

i.e. a company is forced to apply the same insurance premium uniformly across the whole community regardless of actual individual risk.

There are several general problems with this.

  1. It is forcing the low risk customers to subsidise the high risk customers.

  2. It creates no real incentive to be a low risk customer.

  3. Since it fundamentally misprices risk, it creates a market for a company to sell only to low risk customers. So this policy response can only really be workable if a company is further forced to sell to all customers regardless of risk.
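The cross-subsidy in point 1 can be shown with invented figures (a worked example only, not real market data):

```python
# Community rating: one uniform premium equal to the portfolio-wide
# average cost, so low-risk customers pay more than their own risk
# warrants and high-risk customers pay less.

groups = {
    # group: (number of customers, expected annual claim cost per customer)
    "low_risk":  (800, 300.0),
    "high_risk": (200, 900.0),
}

total_cost = sum(n * cost for n, cost in groups.values())
total_customers = sum(n for n, _ in groups.values())
community_premium = total_cost / total_customers

print(community_premium)  # 420.0: low-risk pay 120 above cost, high-risk 480 below
```

This also makes point 3 concrete: a competitor selling only to the low-risk group could profitably undercut the 420 premium, which is why community rating usually has to be paired with an obligation to insure everyone.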

Whether you could justify such market intervention would depend on ideology and the importance of the particular insurance.

Limiting Collection

I think your submission already acknowledges some of the difficulties with that. However some of those difficulties would be:

  1. Jurisdiction - the Australian government either in law or in practice finds it difficult to control data that is collected or held overseas. It would be difficult to roll back the tide of globalisation. Possibly treaties for cooperation between countries would need to be established before you could even make a start on that - and that assumes that you can get multilateral agreement on appropriate standards to apply.

  2. Transparency - it is difficult to know what data is even collected, let alone how it is used.

Companies will obviously not be keen to make public the details of how they price risk.

Again, whether you could justify such intrusion by government would depend on ideology and importance.

3 Likes

I am happy with the submission. It is also good that Choice has tried to address the Privacy issue.

In today’s day and age, many consumers wish to enjoy the use of technology, but may not agree with the loss of privacy which is attached to its use. Whenever one accesses technology, it collects data on the consumer whether it is an appliance (TVs, smartphones, PCs), loyalty scheme, financial product (e.g. financial institutions tracking purchase patterns etc) or when one accesses the internet (through cookies or information exchange for signing up to a ‘service’).

Currently, when one uses technology, one has no choice but to sign away their data/privacy, as this data/loss of privacy has monetary value to those who collect it.

Unfortunately technology has advanced at a faster speed than the legislation/policies of government, and maybe it is time to put the cat back in the bag.

6 Likes

Hi there!

Thanks so much for your input.

The key point that we make in this submission is that unfair discrimination should not occur. People should not be unfairly penalised because of insurers’ assumptions. The example we use concerning education levels is useful to consider because two cautious drivers (for example, one who has not finished high school, while the other has a Master’s degree) may be charged different rates. If they are both cautious drivers and have no history of accidents, why should their education levels influence what they pay for their car insurance?

You mention that insurers “have a right to make a profit” - this should not be at the expense of the customers that they serve.

5 Likes

Hi Mark,

Thanks so much for your comments!

It is a lot of work reading through all of those documents, isn’t it? While there were 10 discussion questions, we didn’t respond to all of them in detail because there are other groups that are better placed to respond to some of the issues that are raised. CHOICE tries to stay focused on consumer issues in our submissions, and in this particular one we focused on a number of key topics like unfair discrimination.

Your suggestion about articulating outcomes is a useful one: the AHRC is actually going to be doing a lot of research off the back of the submissions they received (including ours) so they may look into this further. The purpose of our submission is to outline current and emerging issues. This helps the AHRC develop the scope of their research, and develop key recommendations.

Regarding your last question: ‘As well as a no harm test, a shared benefit that improves the future of all might assist in deciding if a new technology should be accepted?’ - where would this put entertainment?

3 Likes

Thanks for your comments @boblorel - I think some of your points speak to ‘equality of outcome’ which is a theme I raised in CHOICE’s submission.

The pace of change is so rapid, it’s important to make sure that technological change isn’t leaving people behind!

3 Likes

Not to labour the point too much but in the hypothetical example that I am presenting …

  • there are no assumptions - it’s just what the claims data shows

  • education level should influence insurance premium because that’s what the claims data shows

“unfair” is a very difficult word to define. In the context of insurance I would take “fair” to mean “backed up by the claims data”.

“Everybody” knows that adolescent males do stupid things and that this leads them to have accidents when behind the wheel of a car. Assuming that this can be backed up by the claims data, is it “fair” to charge adolescent males more for insurance? This is discrimination on the grounds of both age and gender. Is it “fair” for one particular adolescent male who happens to be very sensible and has never had an accident to be charged the same as all other adolescent males?

How are “age” and “gender” any different from “education level”?

In theory government legislation could provide an exhaustive list of the attributes that insurers are allowed to use to partition and analyse claims data. I am not advocating that.

6 Likes

TV’s out, reading’s in! Computer games provided enormous benefit to me when I was young, improving my coordination (which was dreadful and remains poor). Cinema? In, as a social event.

Sports? Well ‘no harm’ rules out all ‘sports’ involving animals, and the ability of young children to play contact sports. Those contact sports would be available to people who are able to make an informed decision. Do sports provide a ‘shared benefit’? Yes, inasmuch as they bring communities together.

All forms of gambling would obviously be out based on ‘no harm’ as well as ‘shared benefit’.

Recreational drug-taking would change significantly. Alcohol would have to be banned along with tobacco, based upon the ‘no harm’ and ‘community benefit’ definitions. Marijuana? Has some benefits, and lacks the anti-social aspects of alcohol and tobacco - as long as it is not smoked.


Now, to the paper. I have not read it all, and have gone much broader than either it or the Choice response (as shown at the very end of this post).

  1. I am concerned at the makeup of the AHRC’s project partners and Expert Reference Group. The question is about human rights - both groups are dominated by industry and government (I include most of the named academics as being more representative of industry than either academia or humanity). Journalists have zero representation!
  2. A major technological concern is of algorithms. These are one means of getting to artificial intelligence, and are used extensively to say in effect that ‘if person x is of age y and income z, and lives in area n, then they are entitled/not entitled to the loan for which they apply’. They make assumptions about people based upon broad generalisations that are accurate to a point but can often be discriminatory. I remember the New York Police Department had a problem with using an algorithm that - without being intentionally designed to do so - discriminated against African Americans. While algorithms can be useful, they are not a perfect substitute for human intervention, and any algorithm will contain the unconscious biases of its creators. I suggest that there should be appeal mechanisms against algorithmically made decisions. (I see that this issue is discussed on page 29 of the paper.) (In the last paragraph of page 33, a hypothetical is presented about a discriminatory judge. This in fact was the case for many years in an Australian jurisdiction where a judge’s child was killed by a drunk driver and the judge was known to be severe on drunk drivers.)
  3. Ability to opt out. Unless a service absolutely requires you to provide or approve its collection of data in order to perform its core functions, the user should be able to opt out. This opt out should be able to be made at a granular level. Example: if you have the Uber app installed on your phone it is designed to track you everywhere. The user should be able to set this to only track while the app screen is displayed.
  4. We want and demand transparency of government and business, but often do not get it. Article 19 of the UN Universal Declaration of Human Rights (UDHR) states that “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers”. This right is increasingly under threat, as seen when police raided the offices of politicians after information was leaked, and the office of a human rights lawyer when he revealed that Australia was spying upon East Timor in relation to sea licencing. I suggest that any submission should look to reducing the power of government to spy upon its own people - a power that has been greatly increased by the rise of the Internet and by ‘anti-terror’ laws that have been used for other purposes. Journalists and their sources require greater legal protection than is currently given them, along (dare I say it?) with lawyers and their clients.
  5. Any person accused of a crime should have the right to obtain details of how evidence has been gathered, including whether it has been gathered unlawfully (e.g. with Stingray or similar devices). It is important to note that US prosecutors have dropped cases entirely rather than reveal the use of these.
  6. Where technology is required to effectively participate in society, there must be alternatives for those who are unable to use this technology. (This reiterates what has already been said by others.)
  7. The issues paper refers to the International Covenant on Civil and Political Rights (ICCPR), and to the possibility that the use of technology for surveillance purposes can be overly broad, impinging on the privacy and reputation of innocent people. We have already seen this, and I have referred to it in point 4. Article 12 of the UDHR similarly states that “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks”. I think it is clear from the Snowden documents that this principle has been violated not just by the US but by Australia and other countries. We need laws that stop indiscriminate collection of personal data.
  8. We must have access to, and the right to correct or delete data that is stored about us, as well as metadata. This right is obviously going to vary depending on the data and its purpose, but is part of the right to privacy.
  9. Devices in the home - computers, mobile phones, Amazon Echo, Google Assistant etc. - must not be able to collect data about the user without explicit consent on each occasion. They must not be able to collect data in any form without specific activation on each occasion they are called into use. This may vary for health products (e.g. an insulin pump or pacemaker).
  10. Section 4.1 suggests that some consider access to the Internet a human right. I would argue that it is undoubtedly a human right, and that all Australians should have equal Internet availability.
  11. Part of the technological revolution includes a growing understanding of people and what drives us to make decisions and choices. This is useful, but also incredibly dangerous - as seen in the US by the rise of the ‘alt-right’ (there’s a long story behind it, that I won’t go into here). Humans need to be involved in decisions about how to use such knowledge, and not just advertising executives or politicians but also representatives of the rights of individuals (clearly not politicians).
  12. Primary and high school education must include courses on the ethical uses of technology, and on how to communicate online.
  13. The paper proposes a ‘Turing Stamp’ to show which technology providers are certified as deserving of trust. This sounds like a new opportunity for something like greenwashing, where companies gain a certification that in reality means very little. I would prefer the application of Asimov’s four laws of robotics - using a broad definition of ‘harm’. (Oh boy - that Wikipedia entry is a rabbithole!)
  14. An issue that is not discussed as far as I can see in a brief glance through the paper is that of human rights to dignity and employment. It is well known that people are being displaced from work by machines and/or by shifting jobs to lower-paying parts of the world. Australia needs a basic income. We also need to stop relying upon volunteers - my mother said that when she dropped some lost property off at a police station recently she was served by a volunteer! This is not appropriate at all - we need to fund such public services adequately. We are seeing the opposite of what was predicted in the mid-20th century, of people working two or three days/week and having crazy amounts of leisure time; instead people are working unpaid overtime and working even at home! Australian laws should protect employees from this kind of exploitation.

Oh boy - I’ve made it to page 44, and the consultation questions. I think I have addressed most of these, but would add that voluntary industry cooperation is insufficient; we need legislation and an enforcement body that is not controlled by the industry.

I see that there is an appendix listing government innovation and data initiatives. It basically shows that the government is spending bugger all on these, and has scattered them across portfolios rather than providing central coordination.


Finally, I’m going to glance through the Choice response.

  1. Recommendation 1 should address differential pricing in all parts of the economy; it is not just insurance companies that charge different rates to different customers.
  2. Recommendation 2 should address all sectors of the economy, not just financial services.
  3. Recommendation 3 should state that individuals must have control and ownership of their health and DNA data.
  4. Recommendation 4 applies to all sorts of industries. I found out recently that recruitment agencies routinely and automatically scour the Internet for information about candidates, when I was asked about a photo of a pet.
  5. The CDR should be expanded to more industries. Health and government are two obvious ones, as are large supermarket chains with their loyalty cards; probably others too. And yes, access should be free (this comes with owning one’s own data).
  6. I absolutely agree with using the EU’s GDPR as a model.
  7. Going big, Australia should seek, by referendum, to incorporate the UN UDHR and ICCPR into its constitution. It is a signatory to both, and it should demonstrate its commitment to them.
5 Likes

Or, swallow our pride, and sign up to it - even though we are not part of the EU. That way a large number of businesses would already be compliant with “our” newly minted regulation. (Unfortunately we would still need to give other businesses X years to become compliant.) That way businesses who operate in both jurisdictions wouldn’t have to waste time and effort nutting out the differences between EU and AU and special-casing their systems and processes.

4 Likes

I disagree with both of these points.

There ARE assumptions. The main one is presuming that the less educated person MUST fit the correlation, regardless of what the hard evidence (their safe driving record) says. This is a human rights issue because it punishes someone already disadvantaged who has exactly the same performance as someone better off.

For this reason it’s vital education level doesn’t affect insurance. It simply locks poorer people out purely because they’re not from a university background.

1 Like

Re “No harm, shared benefit test”

If you are suggesting this could be an impediment to new or changed forms of entertainment, you also need to define the scope of what is entertainment. The definition of entertainment is very broad, from devices such as 3D holography to a genuine version of role play as in the movie “Westworld”.

In assessing change the same rules could apply, although there may also be a stronger moral argument concerning what some of us consider acceptable behaviours.

The framework on how we could be responding to this or any similar question exists in the prior suggestion -

The future of technology is uncertain. Any solution intended to respond to the challenge of change needs to be adaptable. It cannot predict outcomes, nor can it restrict thinking, however it can be required to ask permission?

Drawing on history, it is unlikely that the lions and their victims who provided spectacle for the ancient Romans were afforded such consideration. It is asserted that the spectacle increased the closer the empire came to its end. I wonder?

P.s.
None of this is about reassessing what we do today, whether it is good or bad. It is all about what might come next. I could argue historically that if online gambling or betting had been subjected to an ask-for-permission, public review and assessment process, perhaps it might not exist today. It has been enabled by technology. There are many aspects of how it functions that feed on poor decision making and emotional insecurity, in an environment that is neither transparent nor managed.

It may be unreasonable to ask for consideration or controls over the impact of technology on some aspects of how we manage our lives but not others. Arguing that AI for assessing insurance risk needs to be unbiased is no different to arguing AI should not target individuals more likely to become addicted to gambling.

3 Likes