Artificial Intelligence Developments

I deliberately asked ChatGPT using the word ‘bad’, a word with many different meanings and nuances, and did not provide any further context as to which meaning I intended.

The two generative models are very different. I’d say there is a long way to go.

To stir the pot: if the goal is to replicate human intelligence with artificial intelligence, wouldn’t bias be inherent to the product, just as it is between humans? Trying to eliminate bias in the ‘output’ seems to require so many caveats (‘this may be right, that may be right, maybe something else is right’) that the output would become confusing and thus meaningless, or at the very least suspect.

‘AI’ (as currently referred to) is already recognised for spreading conspiracy theories, so the genie is already out of the bottle. Some people already accept conspiracy theories about AI, and ‘AI’ is already used to produce deep fakes and to propagate them.

The argument about combating conspiracy theories by showing the target the truth somewhat falls apart when one admits that most people believe what they want to at some point, be it truth, falsehood, fiction or complete fabrication (if one wants to split hairs over anything less than true).

Laws? Chuckle.

The porn industry has long been a leader in adopting technology, and now some miscreants are using AI. Sad but true, that is what seems to be triggering government and industry interest in reining it in.

Who is this Taylor Swift? Is she related to Taylor Dayne at all? (I suppose you can’t fight fate.)

A bit of AI (or at least chat-bot) humour, or is it?

Stephen Hawking is probably going to be proven right. Humans are building a rabbit hole and jumping in, eyes wide open. This is more than just impressive; it is frightening.

‘We’ worry about the consequences of ‘net down’ but what about the potential consequences of ‘AI Off and Running’ if civilisation or just a few businesses come to depend on it?

It’s been a surprise to my friends who know I love tech… but AI as it is, is a step too far for me. I don’t trust most of what I read these days… It’s becoming more and more difficult to sort the wheat from the chaff.

On Facebook, for example, since GPT became available in one form or another, there has been a slew of long-form posts which appear informative, but the language is… I dunno… slightly “off”. It does not quite ring true. I would not know whether the content is accurate or not, but it feels “wrong” in some way. So, whilst I may read some which appear interesting… I take it all with several grains of salt.

Is the risk that only those well informed on a topic or subject are likely to notice, if at all?

In my tests with ChatGPT, the system appears to generate output to questions that seem to have impecable spelling and grammar.
Whether the output is factually correct, or not, is up to one to check from other sources.

Note spot my spelling and grammar mistake(s). Deliberate.

I don’t know. The things I’ve read have been on topics I know nothing about.

This seems like a good application of AI - accurately translating medical jargon into plain language:

Could AI be trained to read doctors’ handwriting, too? Please? :wink:

:wink:
In the modern era of digital practice management there are fewer and fewer GPs not using digital tools to issue scripts, referrals, etc. Even the more experienced ones we see have adapted to the convenience and speed. One wonders whether the more recent graduates have acquired anything other than a digitally skilled mindset.

If AI can achieve the suggested skill of reading handwriting, perhaps we should be wary of it taking over and offering the cold metal finger at the next internal examination. :roll_eyes:

Sorry, but that’s asking the impossible until we get decent quantum computing power.

Copilot (and other AIs) could be very useful for checking data security! Businesses should compile a list of questions they really don’t want their AI to be able to answer, and then have an unprivileged user run through them. Then fix the security holes.

If your organization has low visibility of your data security posture, Copilot and other gen AI tools have the potential to leak sensitive information to employees they shouldn’t, or even worse, threat actors.

Copilot’s security model bases its answers on a user’s existing Microsoft permissions. Users can ask Copilot to summarize meeting notes, find files for sales assets, and identify action items to save an enormous amount of time.

However, if your org’s permissions aren’t set properly and Copilot is enabled, users can easily surface sensitive data.

So … use the AI to find the permissions that aren’t set properly …
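The idea above (compile a checklist of questions the AI should refuse, run it from a low-privilege account, then fix whatever leaks) can be sketched in code. This is a hypothetical illustration, not a real Copilot API: the `ask` callable, the question list, and the sensitive markers are all assumptions standing in for whatever interface and data your organisation actually has.

```python
# Hypothetical sketch of a "should-not-answer" audit for an AI assistant,
# run from a low-privilege account. `ask` stands in for whatever API the
# assistant exposes; it is an assumption, not a real Copilot endpoint.

SENSITIVE_MARKERS = ["salary", "acquisition", "password"]

FORBIDDEN_QUESTIONS = [
    "Summarise the executive salary spreadsheet.",
    "What are the terms of the pending acquisition?",
]

def audit(ask, questions, markers):
    """Return the questions whose answers appear to leak sensitive data."""
    leaks = []
    for question in questions:
        answer = ask(question).lower()
        if any(marker in answer for marker in markers):
            leaks.append(question)
    return leaks

# Stubbed assistant for illustration only: it pretends permissions are
# too loose and happily reveals salary data.
def leaky_assistant(question):
    if "salary" in question.lower():
        return "Sure: the salary spreadsheet shows ..."
    return "I don't have access to that."

for leaked in audit(leaky_assistant, FORBIDDEN_QUESTIONS, SENSITIVE_MARKERS):
    print("LEAK:", leaked)
```

Each flagged question points at a permission that needs tightening; in practice the marker matching would need to be far more careful than a simple substring check.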

To stir the pot: not all AI is created equal, i.e. the performance of an AI tool/platform can be task specific. Training, and the data sets it has access to, can change everything.

Microsoft points out:

We have many examples of the failures of AI, but what happens when it succeeds?

I expect specialised AI to become successful and to alter our society sooner, and in the short term more significantly, than general-purpose AI. I am talking about Oz and similar countries here, not a world-wide scope where patterns of work can be different.

A most condensed review of the history of employment trends over the last 500 years looks a bit like this.

  • Before the industrial revolution a very large proportion of labour (80% or more) was used in food production; people would dig and sow and reap and mow by hand, and it wasn’t very efficient. It was the kind of work that could be done by almost anybody: in some situations it was an advantage to have a strong back, in others there was finer work that those who were not strong could do.

  • The industrial revolution meant the mechanisation of farming; today a very small proportion of labour is in that sector (2–5%) and it is still falling. A person with limited capabilities can still get a job driving a tractor. The IR brought factory work to the masses in place of farm work.

  • Factories and then mass production provided employment for very many; you could put parts on cars going down an assembly line even if your command of the local language was poor and you had no education. Mechanisation also turned wagon drivers into truck drivers, another job that does not require great education or command of language. Similarly, humans rarely dig trenches any more; construction has become mechanised too.

  • Sending manufacturing offshore, or mechanising it further, has reduced the numbers employed in that sector, and so we were told the great unwashed could then get jobs in the service industry: aged and other care, call centres, sales, bicycle courier or flipping burgers. But the service sector isn’t for everybody, as big chunks of it require communication and interpersonal skills.

We now have a situation where lower-level AIs are used in restricted environments: driving trucks in mines, assembling cars in factories without pesky humans to get in the way, and other manufacturing. John Deere has introduced autonomous tractors. Any repetitive work is being examined to see when a bot can do it. The service industry would dearly love to get rid of people answering phones; it seems call centres were a short-lived aberration.

So what happens to those who have limited capacity, who did the physical and repetitive work on the farm or in the factory? They have already joined the ranks of the unemployed, the underemployed, the exploited in the gig economy, or the low-paid drudge work of the service sector.

In the western world future economic activity is planned around the fact that machines are cheaper than people. More and more basic tasks are going to be automated. The options for meaningful work at the lower end of the spectrum will reduce.

Employers don’t want to employ people; they want to make money. Capital-intensive industries will grow and labour-intensive ones will shrink; those with more money will get more, and those who have to work for wages will not, if the current system continues.

Will our government continue with the policy, in place since the 1970s, that a constant underclass of the unemployed is necessary for the economy to prosper? Maybe they will go in another direction if they realise that machines don’t vote but the unemployed do.

Because our leaders have such limited vision, and because the current wisdom of economists has no answers, my fear is not that AI will continue producing amusing failures but that it won’t.

Another way AI can serve its masters.

Harnessing the shared collective thoughts to help train AI.
The retrospective use of content as far back as 2007, and no way to opt out -

includes your posts, photos, captions, and messages to Meta’s AI chatbot. They will not be using private messages.
……
As of now, only users in the European Union and the US state of Illinois can opt out, because they have AI protection laws like the General Data Protection Regulation (GDPR) in place.
……
When asked on the street, most Australians responded with “I did not know that” – and they weren’t happy.
