This is what Australia needs to do to regulate AI

We need strong laws and well-resourced regulators to make sure consumers are protected from the possible harms of AI.

CHOICE Senior Campaigns and Policy Adviser Rafi Alam

Once confined to academic papers and science fiction, artificial intelligence (AI) officially moved out of the research labs and into the consumer market in 2023. AI-based tools ChatGPT and DALL-E have become household names, and AI promises to deliver both productivity and fun.

But although AI has its benefits, we shouldn’t ignore the risks. Businesses are looking to AI to increase profitability, often at the expense of consumers.

The risks of artificial intelligence

Our investigations over the past year have found that facial recognition technology has made its way into retail stores, pubs, clubs and stadiums. This technology lets businesses automatically refuse access to people based on identity databases, but experts have found a startling rate of inaccuracy, especially for people with disabilities and people of colour (particularly women).

AI is also being used to process more data than ever before. Businesses even use algorithms to decide how much we should pay for things, from our groceries and subscription plans to our insurance and even our home loans.

But when pricing decisions are entirely automated it can lead to discriminatory outcomes, such as higher premiums for people from marginalised backgrounds or increased prices for older people.
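To make that concern concrete, here is a purely illustrative sketch in Python. The rule, loadings and postcodes are invented, and no real insurer's pricing logic is being described; the point is simply that an automated rule built on proxy variables like postcode or age can quietly charge some groups more without ever naming a protected attribute.

```python
# Purely hypothetical pricing rule: the weights, loadings and postcodes
# are invented for illustration. Postcode often correlates with income
# and ethnicity, so a rule like this can discriminate without ever
# naming a protected attribute.
BASE_PREMIUM = 500.0
POSTCODE_LOADING = {"2000": 0.00, "2770": 0.25}  # invented loadings

def quote_premium(age: int, postcode: str) -> float:
    price = BASE_PREMIUM * (1 + POSTCODE_LOADING.get(postcode, 0.10))
    if age >= 65:  # older customers automatically pay more
        price *= 1.30
    return round(price, 2)

print(quote_premium(age=40, postcode="2000"))  # 500.0
print(quote_premium(age=70, postcode="2770"))  # 812.5
```

Because the rule runs automatically on every quote, the disparity is applied consistently and at scale, which is exactly why entirely automated pricing needs oversight.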

Generative AI like ChatGPT comes with its own set of hazards. Chatbots using ChatGPT can replicate false information in their answers or provide dangerous advice. The Federal Trade Commission, the US’s competition watchdog, is currently investigating whether ChatGPT has harmed people by creating false information, and is also looking into its privacy practices.

What needs to be done to protect consumers?

AI laws should be risk-based

Experts have been sounding the alarm on these risks for quite some time, but governments around the world are only just catching up. Australia is now running its own consultation on AI, and CHOICE has just submitted our suggestions on how the government can protect consumers from these risks.

At the heart of our submission is the need for a risk-based approach to AI, just like the European Union is proposing. A risk-based framework categorises AI activities along a spectrum, from those considered minimal risk, which require few limitations, to those considered high risk, which are restricted or even prohibited.
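As a rough illustration of what such a categorisation might look like in practice, the tier names below follow the EU proposal, but the example activities and their classifications are assumptions for illustration only; real classifications would be set out in law.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed only with strict obligations and oversight"
    LIMITED = "allowed with transparency requirements"
    MINIMAL = "few or no extra obligations"

# Illustrative mapping only; not a statement of any existing law.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "facial recognition in retail stores": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for activity, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{activity}: {tier.name} ({tier.value})")
```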

We also suggested that our AI laws should codify consumer rights to safety, fairness, accountability, reliability, and transparency.

The federal government should also strengthen existing laws like the Australian Consumer Law and the Privacy Act to ensure people are comprehensively protected from AI misuse or exploitation.

Strong regulators are essential

But making new laws isn’t enough – we need strong regulators to enforce these laws. CHOICE is calling for a well-funded AI Commissioner with a range of regulatory powers including civil and criminal penalty powers.

An AI Commissioner should bring specialist expertise and collaborate with the existing regulators responsible for areas affected by AI, such as consumer rights, competition, privacy and human rights.

Big tech wants to regulate itself, but history proves these businesses can’t be trusted to write their own rules. Australia should follow the lead of the European Union and Canada and lay down the foundations for a fair market where businesses must guarantee safe, fair, transparent, reliable, and accountable AI systems before releasing them.

Not only would this protect our community from harm, it would also encourage innovation and promote responsible AI use.

You can read our full submission to the government here.

9 Likes

Not sure I consider the things mentioned to be in the domain of what is called ‘artificial intelligence’.

It is more about the power of data processing and storage, and using that data for real-time analysis and decision making.

And applying ‘expert’ rules to essentially mimic what a human would decide based on the data. Well, many humans are hopeless at decision making, and if that shows up in the rules, then you get garbage. Computers are just far faster at processing data, and can handle much more of it, but in the end, if the rules are wrong, you get garbage decisions.
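A minimal sketch of that point, with a made-up rule and threshold: a rules-based system just applies whatever heuristic a human wrote down, only faster and at larger scale, so a flawed rule produces flawed decisions every single time it runs.

```python
# Toy "expert system": one hand-written rule, applied automatically.
# If the rule encodes a bad human heuristic, every decision is garbage,
# just produced faster and more consistently than a human could manage.
def approve_loan(applicant: dict) -> bool:
    # Flawed heuristic: reject anyone under 25 regardless of income.
    if applicant["age"] < 25:
        return False
    return applicant["income"] > 50_000

print(approve_loan({"age": 22, "income": 120_000}))  # False - garbage rule, garbage decision
print(approve_loan({"age": 40, "income": 60_000}))   # True
```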

2 Likes

Gregr,
I, like you, believe some of the items [examples] mentioned above are more related to “real-time analysis and [rules-based] decision making”.

But where ‘artificial intelligence*’ is employed, it should ONLY be allowed to be stored and shared with a ‘watermark’ to show its ACTUAL source, i.e. NOT human but computer generated.
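One very rough sketch of the idea, assuming nothing about how a real watermarking or provenance standard would work: generated content carries a machine-checkable label saying it is computer generated, with a digest so that tampering with either the content or the label is detectable. The function names and the "ExampleVoiceClone" system are hypothetical.

```python
import hashlib, json

# Hypothetical provenance label, not a real watermarking standard:
# the generating system attaches a record stating the content is
# computer generated, plus a hash so tampering is detectable.
def label_as_ai_generated(text: str, system: str) -> dict:
    record = {"source": "computer-generated", "system": system, "content": text}
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def label_is_intact(record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "digest"}
    expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return record.get("digest") == expected

labelled = label_as_ai_generated("G'day, it's definitely me...", system="ExampleVoiceClone")
print(labelled["source"], label_is_intact(labelled))  # computer-generated True
```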

Look, even I can do a good mimic of “the great yellow-haired American” but AI tools can do it infinitely more realistically and have fooled the best of us. Not that I listen to “the great yellow-haired” one but some who do should be informed it isn’t him. Otherwise, we’ll all begin to believe there is a $3 bill!!!

  • [Intelligence] exhibited by an [artificial] (non-natural, human-made) entity.
2 Likes

Thank you Rafi for your article. Very timely for people to understand the issues.
George

2 Likes

AirBnB, AI and competition or is it the beginning of the end of competition?

There’s a suggestion that AI will follow the path of Google.

worries that “consolidation may be more likely than competition” as AI learns from itself and users drift towards the better-performing platform. “This has implications for geopolitics as well as productivity,” he says. Unprecedented power in the hands of one company.

1 Like

Grrrrrrrrrrrrrrrrr. As an IT professional for 35 years, I could not DISAGREE more with this article.

The government has no place in regulating the internet as they have no right to it.

Welcome Shannon to the merry place we call the forum.

Could you explain a bit more about the problems that you see? My reading of the article doesn’t suggest they intend to regulate “the internet”, but rather some commercial operations that use the internet.

How would you deal with new businesses like Airbnb or Uber? What about the introduction of AI to businesses? Should it all be left alone, leaving them to do as they please?

2 Likes