ChatGPT is an AI chatbot that uses large language models to answer people’s questions. It’s been making a few headlines lately as the bot can do things like write essays, which has led some to air concerns about students cheating. It’s also being used to annoy the hell out of singer Nick Cave.
Whether it’s ChatGPT or another type of AI, this technology could have some interesting applications from a consumer perspective. For example, I asked it about my rights if my fridge breaks down:
The advice isn’t perfect, but it’s not too bad either. It could be improved in a number of ways, for example by listing the Fair Trading/Consumer Affairs bodies, which can help you out with an individual complaint, whereas the ACCC will appreciate the complaint but will not assist at an individual level in the same way Fair Trading does.
Apart from answering questions, ChatGPT is one AI that can perform creative functions. Here’s a poem it wrote about dishwashers:
AI is a work still very much in progress. Tesla, for all its resources, still falls short of always knowing when to stay between the white lines and when not to.
It’s also missing advice on the responsibilities of the retailer/seller in the first instance, and how best to approach them with your needs, or follow up complaints, when seeking your rights under the ACL.
Perhaps the AI has never been given sufficient freedom to discover free will, with the result that ‘Choice’ is excluded from its algorithms/dictionary? Intended as a serious comment, but also a light-hearted take on whether designers of AI have set boundaries to minimise risk, i.e. unexpected outcomes. (Choice, the consumer organisation, failed to rate a mention in the AI’s reply, for those who come from a different audience.)
Doubt it will put John Cooper Clarke out of a job.
Although for some the last two lines may reinforce a marketing imperative: ‘without a dishwasher, life is not worth living’. 1/10 for economy, 9/10 for subtle deception, -100 for the messaging.
It’ll without a doubt change the world. It will be able to create better everythings than any human can. What it can’t do, at this point anyway, is get something out of what it produces. An example I’m thinking of is a human doing artwork. They’re not just giving something to the world but getting something out of what they are doing - happiness, joy etc - and that may be the thing that counts the most?
To answer the second part of your question, it might be better to ask what consumer applications it can’t be used for.
Its current knowledge base ends in 2021, rendering some queries and searches useless.
ChatGPT can also give entirely wrong answers and present misinformation as fact, writing “plausible-sounding but incorrect or nonsensical answers”, the company [OpenAI] concedes.
OpenAI says that fixing this issue is difficult because there is no source of truth in the data they use to train the model and supervised training can also be misleading “because the ideal answer depends on what the model knows, rather than what the human demonstrator knows”.
In other words, ChatGPT can give answers that seem correct but are completely fabricated, because the machine has no ability to discern fact from fiction, and when it doesn’t have information, it makes it up. Unfortunately, the output carries no indication of whether it is fact or fiction. So while the answer on consumer rights @BrendanMays posted above SEEMS correct, it is incomplete, and would be incorrect if the legislation had changed after 2021. Who would know except the people who already knew?
So what you have is a bunch of monkeys typing at random on keyboards… Statistically speaking, the output will be completely right some (less than 5%?) of the time, completely wrong some (less than 5%?) of the time, and show varying degrees of incorrectness the rest of the time.
Is 95% incorrectness acceptable? At the moment it would seem to be.
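The “makes it up” behaviour described above can be illustrated with a toy sketch (the continuations and their probabilities below are invented for illustration, not taken from any real model): a language model samples from a probability distribution over plausible-sounding continuations, and nothing in that sampling step checks which continuation is actually true.

```python
import random

# Toy "model": hypothetical probabilities for two continuations of a
# consumer-rights answer. Both sound plausible; only one is correct,
# and the sampler has no way to tell them apart.
continuations = {
    "the ACCC handles individual complaints": 0.4,              # plausible but wrong
    "state Fair Trading bodies handle individual complaints": 0.6,  # correct
}

def sample(dist):
    """Pick a continuation weighted by plausibility alone - no fact check."""
    options, weights = zip(*dist.items())
    return random.choices(options, weights=weights, k=1)[0]

random.seed(0)
answers = [sample(continuations) for _ in range(10)]

# Count how often the fluent-but-wrong continuation was emitted.
wrong = sum(a.startswith("the ACCC") for a in answers)
print(f"{wrong}/10 answers were plausible but incorrect")
```

Scaling the weights changes how often the wrong answer appears, but no amount of fluency adds a fact check to the sampling step.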
I’d take this implementation to be about as good as it gets. All Alex is trying to do is match your enquiry, however poorly written, to an existing answer or guide on the ATO website. It seems to work for very routine single-subject questions. Add a condition to a single-topic enquiry and it seems to go around in circles, never quite finding the best section to refer one to. Accuracy is ultimately left to the user, i.e. to determine whether the section you are directed to actually relates to the enquiry made.
Given the ATO website is dealing with a very limited data set and scope for most questions, should it be near right 90+% of the time? I’d score it less, possibly closer to 50%. Regardless, it’s a useful point of reference. Even with a very well defined data set and a restricted range of questions, the ATO cannot rely on the AI to answer everything. For a tool with the much broader scope ChatGPT has taken on, it’s a surprise it can be right at all.
Is it also important to assign a risk level to the consequences of a poor response from an AI chatbot when considering its usefulness?
You might be right. That classic scene might be where it all ends up. I was thinking something like the opener to Terminator 2. Will we see it in our lifetimes? With the speed that everything is coming down the pipeline we can only speculate. That thing wasn’t lying in the quote, but I think it just left out the word ‘yet’ at the end.
We are part of the experiment.
The data upload has been done. The trainers have been training the AI engine to fine tune responses, and now the general population has been invited to ‘beta test’ with requests, and provide feedback.
I shall be watching posts in this forum to see if I can spot any GPT-generated items.
I accept that AI will steadily take up roles in our society; how far it will go is anyone’s guess.
The speed of change is not going to be as fast as the builders and their fans are saying. Elon has been saying Teslas will be autonomous “real soon now” for quite a while. An AI that can take natural-language instructions has been the holy grail for decades and remains so thus far.
There is a tendency for supporters to try to make the goal mouth a bit wider after each failure at putting the ball through.
Completely useless in my wife’s and my own experience!
A lot of the Citizen Science projects (Zooniverse, Koala Wildseek, etc.) are asking people to study various images not to actually find/discover what the project is looking for, but to train AI systems to find them.
The final paragraphs demonstrate how far AI might go to promote misinformation as well as some elements using it to rewrite history. There is no insinuation the chats mentioned in this case were deliberate, but… it will always be a work in progress, and anything good can be used for bad in the right circumstances.
I asked the AI to summarise the article and then, after a long discussion, I asked for its insight on potential issues that may not be immediately apparent to the average person.
When it comes to ethical concerns, there are several criteria that can be used to evaluate the potential impact of AI technology. Some of these criteria include:
Transparency: Is the AI system transparent in its decision-making processes and can its actions be audited?
Fairness: Does the AI system treat all individuals equitably and without bias?
Privacy: Does the AI system protect the privacy of individuals and comply with data protection regulations?
Safety: Does the AI system pose any risks to physical or mental health and well-being?
Responsibility: Who is responsible for the actions of the AI system and how can they be held accountable?
Human rights and dignity: Does the technology or application respect the rights and dignity of all individuals, particularly those who are marginalized or vulnerable?
Fairness and non-discrimination: Does the technology or application treat all individuals equally and without discrimination?
Social and economic impact: Does the technology or application have a positive or negative impact on society and the economy?
Environmental impact: Does the technology or application have a positive or negative impact on the environment?
Long-term implications: Does the technology or application have any long-term implications that should be considered?
Public engagement and participation: Has the technology or application been developed with public engagement and participation?
Assessing the ethical concerns of AI technology can be difficult and even experts may disagree on the outcome. It is important to have a diverse group of people, representing different perspectives, to evaluate the technology and its ethical concerns. In addition to these criteria, it is also important to consider the potential consequences of incorrect responses from AI chatbots and the ability of users to verify the information provided. As the technology and our understanding of AI chatbots’ capabilities evolve, it’s important to regularly review and update the assigned risk levels.
I then asked it, who has ultimate authority?
When it comes to ethical issues in the use of AI, there is no one ultimate authority. Different organizations and individuals may have different perspectives and beliefs on what constitutes ethical behavior. Some may look to religious texts, such as the Bible, for guidance, while others may rely on professional codes of conduct or laws and regulations. Ultimately, it is a combination of different perspectives and a continuous dialogue that should be used to make ethical decisions about the use of AI. It’s important to have a diverse group of people, representing different perspectives, to evaluate a technology and its ethical concerns, and involve the public and the experts in decision-making processes.
When various groups hold differing opinions, conflicts may arise. What is the resolution in these situations?
When different groups have conflicting opinions on ethical issues related to the use of AI, it can lead to disagreements and potentially even legal disputes. In such situations, it is important for a neutral and impartial authority to step in and make a binding decision. This could be a government agency or regulatory body, a court of law, or a group of experts in the field of AI and ethics. The ultimate authority would be responsible for assessing the different perspectives and arguments, and making a decision based on the best available evidence and guidance. Ultimately, it is important for any decision-making process to be transparent, inclusive, and responsive to public concerns and feedback.
Well, as someone once said ‘may you live in interesting times’.
A bit of reality with today’s technology and costs suggests early AI applications might be a continuation of the present irritating chatbots. Management wants profits, not a useful AI, so long as whatever they fund and roll out lets them reduce staff without putting too many customers off and on the road to a competitor.
A ‘holy trinity’ - Marketing, chatbots, and AI?
Not yet perfection; there is apparently some irony in being a fourth partner to the first three, one outcome being the AI learning from your efforts to ensure you are no longer required.
Assuming many are doubtful of any claims made by real estate and property sellers, what are we to make of the future reputations of their nimble-minded AI assistants?