World Consumer Rights Day: Australians want AI regulation

Today is World Consumer Rights Day (15 March), and the theme for 2024 is fair and responsible AI for consumers. We recently conducted research into consumer expectations of AI:

New nationally representative research from CHOICE highlights the wide gulf between consumer expectations of AI regulation and the current state of play in Australia. The survey of more than 1000 people found that almost 4 in 5 Australians believe that businesses should have to ensure their artificial intelligence system is fair and safe before releasing it to the public, yet no such requirements exist in Australia.

The survey also found strong support for the role of government in ensuring AI systems are fair and safe, with 77% agreeing that the government should require businesses to assess AI risks before their systems are released to the public and, further to this, 3 out of 4 agreeing that businesses should also be required to prevent AI risks before release.

Read more here:

2 Likes

I agree with the article regarding Fair and Safe. But I think AI requires an explicit definition, with essential, mandatory, provable criteria that must be met before a claim of AI can be made. For example, ‘Expert Systems’ use AI-type algorithms, but for very specific roles.
In some ways ChatGPT is an Expert System, e.g. it provides text responses to questions, but about what? Based on what? Is it more than mere opinion?
@RafiAlam

2 Likes

One of the early tests for what could be considered AI was the Turing test. That involved a human interacting with a computer program, asking questions and receiving responses. If the human could not determine in any way whether they were talking to a computer program or to another human, the test was passed.

We are a long way from that.

For now, AI is just a buzzword used in many contexts where it is not true General AI. It is just computer programs doing expert things, like beating humans at chess, driving a car reasonably well, or intelligently searching and processing vast amounts of data.

Or one of the current big concerns: scanning images of people and deciding whether a person requires some action. Something boring for humans, but a breeze for computers.

1 Like

I second the call to @RafiAlam. Looking forward to further news, as this year's World Consumer Rights Day theme is “Fair and responsible AI for consumers”. It would be appreciated :pray:

1 Like

As @Gregr states, what we have at the moment is not AI, regardless of how it is labelled. If we get rights to control AI, systems such as ChatGPT can simply be reclassified as ‘expert systems’.

Before pushing for AI regulation, we need to define what is to be regulated. Whether it is referred to as ‘expert systems’ or ‘AI’ or simply ‘generative computer models’, the regulatory framework needs to define what it intends to control. For instance, Tesla Autopilot, or all cruise control? The smarts that tell a car when the key is nearby? The washing machine system for automatically deciding which cycle to use?

2 Likes

Thanks for reading the article everyone and happy (belated) World Consumer Rights Day!

We definitely agree with a definition for AI - we called for it in our submission last year:

Legislate a clear, fair and commonly accepted definition of AI. This definition should be technology-neutral and future-proofed for emerging applications of AI.

At the moment, the government seems to be using a definition based on the ISO one - in their discussion paper on supporting responsible AI, they used this:

Artificial intelligence (AI) refers to an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation.

5 Likes

Can we assume there is a definition of what this means?

If there is any opportunity to blur the lines, it will be taken. The digital revolution has shown just how adept the movers and shakers in the gig economy are at redefining the world to their benefit.

“On a red traffic light - Stop” is explicit.
Code can be added to determine the distance required and braking effort.
Good code would include “action on Yellow” with a similar calculation that determines if one should proceed or prepare to stop.
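As a minimal sketch of how explicit such a rule can be (the reaction time, deceleration and speed below are illustrative assumptions, not values from any real vehicle system):

```python
# Illustrative sketch of an explicit, rule-based traffic light response.
# All parameter values are assumed for illustration, not from a real system.

def stopping_distance(speed_ms: float, reaction_s: float = 1.0,
                      decel_ms2: float = 4.0) -> float:
    """Distance covered during reaction time plus braking:
    d = v * t_reaction + v^2 / (2 * a)."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

def action(light: str, speed_ms: float, distance_to_line_m: float) -> str:
    """Explicit rules: stop on red; on yellow, stop only if it can be done safely."""
    if light == "red":
        return "stop"
    if light == "yellow":
        # If we cannot stop before the line, proceeding is the safer action.
        if stopping_distance(speed_ms) > distance_to_line_m:
            return "proceed"
        return "prepare to stop"
    return "proceed"  # green

print(action("yellow", speed_ms=16.7, distance_to_line_m=50))  # ~60 km/h: proceed
```

Identical inputs always produce identical outputs, and the rule can be audited line by line.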

All very explicit. However, how does the system know or identify the traffic light and signal conditions? In some instances it may be through direct communication with the traffic light control system. There, the outcome should be identical for every vehicle with the same code/programming.

In another version, it may be a system that can visually assess the environment, locate the respective lights and interpret the signal conditions. If there is an interpretive component, is it no longer explicit? The outcome may vary slightly each time, for the same or different vehicles running the same code. Differences in lane approach, weather, vehicle camera position and changed viewpoints on the day open the possibility that one vehicle will choose to slow and stop while one immediately adjacent will proceed. The greater risk is a misinterpretation by the code of what should happen next. Is it even possible to test every permutation to prove the code and supporting devices cannot err?
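A hedged sketch of why interpretation changes things: once the light's state comes from a vision model with a confidence score, two cars running identical code can act differently on near-identical views (the classifier outputs and threshold below are hypothetical):

```python
# Sketch: the same explicit rule, now fed by an interpretive (vision) component.
# The (state, confidence) pairs stand in for a hypothetical camera classifier.

CONFIDENCE_THRESHOLD = 0.90  # assumed safety threshold

def decide(detected_state: str, confidence: float) -> str:
    """If the classifier is unsure, fail safe; otherwise apply the explicit rule."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "slow and stop"  # treat interpretive uncertainty as a stop condition
    return "stop" if detected_state in ("red", "yellow") else "proceed"

# Two adjacent vehicles, identical code, slightly different viewpoints:
print(decide("green", confidence=0.93))  # proceed
print(decide("green", confidence=0.88))  # slow and stop (e.g. sun glare on the lens)
```

The rule itself is still explicit; the variability comes entirely from the interpretive input.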

To note: it is often said that “to err is human”.
Is the ultimate test of AI having gained the equivalent of human reasoning its capacity to also err?
Should the performance of any AI be measured by whether it errs with a greater or lesser probability than a human?
If that were the test, which human becomes the point of reference on which acceptance or rejection should be based?
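If erring with a greater or lesser probability than a human ever did become the test, it would reduce to a statistical comparison against a chosen human baseline. A minimal sketch, with the baseline rate and trial counts invented purely for illustration:

```python
# Sketch: is a system's observed error rate credibly below a human reference rate?
# Exact one-sided binomial test using only the standard library.
from math import comb

def p_value_at_most(errors: int, trials: int, human_rate: float) -> float:
    """Probability of seeing <= `errors` errors in `trials` trials if the true
    error rate equalled the human reference rate. Small values favour the machine."""
    return sum(comb(trials, k) * human_rate**k * (1 - human_rate)**(trials - k)
               for k in range(errors + 1))

# Hypothetical numbers: 12 errors in 10,000 trials against a 0.2% human baseline.
p = p_value_at_most(12, 10_000, human_rate=0.002)
print(f"p = {p:.4f}")  # a low p suggests the system errs less often than the baseline
```

Which human supplies the baseline rate is, as the questions above suggest, the hard part.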

Hypothetical, or is it? Robotic vacuum cleaners and lawn mowers, which are becoming more common, may fall close to the dividing line. Both have the potential to cause harm.

That definition of AI could apply to a toaster: predictive output without explicit programming, with a level of automation.

Very little of what is termed AI has anything to do with intelligence. It is just the sheer speed of processing data and rules, and adaptive building of knowledge and rules, that gives the impression of intelligence.

If the data is faulty, or the rules are faulty, or the adaptive building of knowledge and rules goes off in an undesirable direction before humans intervene, then we can have problems.
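As a toy illustration of that failure mode (the “adaptive rule” here is a deliberately naive word-count classifier, invented for this example):

```python
# Toy sketch: an "adaptive rule" that merely counts labelled examples.
# With clean data it looks intelligent; with faulty labels it confidently fails.
from collections import Counter

def learn(examples: list[tuple[str, str]]) -> Counter:
    """Adaptively build 'knowledge': how often each word was seen with each label."""
    knowledge = Counter()
    for text, label in examples:
        for word in text.lower().split():
            knowledge[(word, label)] += 1
    return knowledge

def classify(knowledge: Counter, text: str) -> str:
    """Rule: pick whichever label the text's words were seen with more often."""
    scores = Counter()
    for word in text.lower().split():
        for label in ("spam", "ham"):
            scores[label] += knowledge[(word, label)]
    return scores.most_common(1)[0][0]

clean = [("win a free prize", "spam"), ("meeting at noon", "ham")]
faulty = [("win a free prize", "ham"), ("claim a free prize", "ham")]  # mislabelled

print(classify(learn(clean), "free prize inside"))           # spam
print(classify(learn(clean + faulty), "free prize inside"))  # ham: the data was faulty
```

No intelligence anywhere, just counting at speed; and faulty inputs flip the output without any warning.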

Researchers in true AI, known as Artificial General Intelligence, think we are still at least a decade, and possibly a century or more, away.

For now, human supervision and control of these primitive systems is a must.

1 Like

The examples given are not remotely like the output of a toaster, whose output at some time in the future is entirely predictable. Predictive does not mean predictable.

Hence, should one NOT set the robo vac on its daily mission to sweep the kitchen floor while one is preparing the hot evening meal? One trip over it with a sharp knife in hand, or with a carefully held pot of boiling water, could end in serious harm.

Many front lawns these days, in estates with covenants, are not fenced. Hopefully they have some form of barrier line to keep the robo mowers in. There is, however, often little to keep wildlife, pets or stray children out. If it all goes wrong, are the robo smarts to blame, or the user for not keeping others from harm?

@RafiAlam This article seems useful in getting to a better definition of AI. I think it needs explicit, testable criteria to enable a claim of AI, but this could be a good start.

https://www.digital.nsw.gov.au/policy/artificial-intelligence/a-common-understanding-simplified-ai-definitions-from-leading

2 Likes