AI Chatbots like ChatGPT and their potential consumer applications

Thanks for the link. A well-written article.


Sooner or later someone was going to look under the hood and see what was open.

One expert put it succinctly:

Julia Powles, an associate professor of Law and Technology at the University of Western Australia and the director of the Minderoo Tech and Policy Lab, says programmers have to train AI technology such as ChatGPT to behave ethically.
“These are not reasoning machines, they’re word-prediction machines,” Dr Powles explains.
“Because they have no concept of what the words they generate mean, they simply have no capacity to reason ethically.”
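To make the "word-prediction machine" point concrete, here is a toy sketch in Python (my own illustration, nothing like ChatGPT's actual scale or architecture): a model that picks the next word purely from counts of what followed it in its training text. It produces fluent-looking output while understanding nothing.

```python
# Toy illustration of "word prediction": a bigram model that picks the
# most frequent next word seen in its training text. Real LLMs use neural
# networks over tokens, but the objective is the same: predict the next
# word, with no model of what any of the words mean.
from collections import Counter, defaultdict

training_text = (
    "the court found the claim failed . "
    "the court found the argument failed ."
)

# Count how often each word follows each word.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most common next word, knowing nothing about meaning."""
    return follows[word].most_common(1)[0][0]

prompt = "the"
out = [prompt]
for _ in range(5):
    out.append(predict_next(out[-1]))

print(" ".join(out))  # prints: "the court found the court found"
```

Note the output: grammatical-sounding, confident, and going nowhere. Scale that up by billions of parameters and you get fluency, but the ethical reasoning Dr Powles mentions has to come from somewhere else.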

“Upgrade” at your peril!
“The ultimate upgrade” – Doctor Who & the Cybermen (parts 1 & 2) | BioethicsBytes


Bring on the Butlerian Jihad.

Fun with AI. Trust me! :rofl:

A wonderful example of the world’s growing industrial might. Remembering that in every contested case one or other of the lawyers stuffs up and loses, A.I. simply provides new help in that process. As happens all the time, lawyers present the wrong law or the wrong cases; A.I. now allows them to do so on an industrial scale.

How long will it be before AI bots displace humans? The silliness of many internet articles already shows how inane these bots can be, but Microsoft might get the gong. How did I miss this?

edit: Another spin on the story


As with a lot of technology, AI is being sold on its smoke more than its smell. This is not the only recent report of blind faith in what is, at the end of the day, a very fancy search machine coupled to linguistic translators.

There is promise in AI, but the terminology is being applied to age-old rules-based processes and gaming methodologies, not just the emergent future its protagonists hope and expect it can and will deliver, save for refusing to ‘open the door’. It is not intelligence, nor is it always more accurate than a first-week staffer with an hour’s training.

There is more to this case than the present limits of AI. How did such accusations get published without a human checking them? What kind of review mechanisms do they have that allow this to happen?

In another 30 years we will have a better idea of which areas AI is useful in and how to verify its output. There is going to be much heartache if AI proves unable to meet reasonable standards in areas where the marketeers so desperately want it to work and already assume it will.

In future I can imagine self-guided vehicles carrying big disclaimers that in some environments all guarantees and promises are void, you have no insurance, and you are on your own.