Artificial Intelligence Developments

It’s also about the ability of quantum computing to solve problems and find answers consistently, even where the effects of chaos abound.

AI is a different type of intelligence from those responsible for the chaos, although in this topic the recent references relate to encryption applications.

Victoria does not have a monopoly on recycling challenges. No need for quantum computing; it might just have fewer opportunities to bury the obvious.

3 Likes

It’s a kelpie-sized mechanical quadruped they call Spot. I bet the kelpie is smarter.

2 Likes

How many mechanical dogs does it take to open a door?

Probably none of them but only one of these.

Probably best not to watch “The Terminator” movie tonight.

According to the video, one, if it has a grasping arm. I had a dog who could open that kind of lever-arm door latch, though he couldn’t open round knobs. He could also stalk you unseen if he knew he would be sent home on being spotted. This included staying well back until you turned a corner, or going the other way round a block: if you went south then east, he would go east then south and pick up your trail from the other direction.

Another dog could bring you each of his toys by name, and could be instructed to go to any member of the household by any other member, no matter where they both were in the house. Out of all this I suspect robots can open doors.

Edit: I forgot he could also cross very busy roads without coming to harm, including waiting for the lights to change in his favour. He went for trips on a ferry alone and came home, though this was a while ago, before fines and before rangers collected dogs roaming free.

3 Likes

No poop to clean up with the modern version, but those oil stains on the concrete might be harder to deal with.

Can you really teach an old droid new tricks, or is there a memory limit?

“Theoryless knowledge”. It’s one thing to get the right answers; knowing why they’re right is another thing altogether.

1 Like

In my mind I can think of a few responses to this.

Firstly, how does general anaesthetic work? Nobody knows, but it does!

Then there is the woman who can smell Parkinson’s Disease. How? She doesn’t know, and the scientists who tested her don’t know, but they are working to figure it out, as the condition is quite difficult to diagnose. (She was convinced that one of the control subjects had Parkinson’s; they were not formally diagnosed until several months after her ‘diagnosis’.)

Finally, we already have algorithms with built-in racism! Accidental, perhaps, but there.

To summarise, we have always had ‘theory-less knowledge’, whether it is the seemingly obvious, such as the sun appearing to orbit the Earth, or the arcane, such as general anaesthetic. Computers and their algorithmic behaviours simply make the problem larger, by producing ‘answers’ without really understanding the questions. Is this a bad thing? Yes, when police use algorithmic profiling to identify criminals without considering their own biases. What about when an algorithm produces a new antibiotic, though? Maybe a cure for Ebola? Should we worry that we don’t know how it was figured out, or should we simply be glad that we have it?

3 Likes

One of the risks in machine learning not often mentioned is the difficulty of distinguishing between ‘is’ and ‘ought’, or how something does work compared to how it should.

If an AI is taught by exposing it to many real-world cases, it will learn what ‘is’. If it is learning chess, it has the clear objective of winning as often as possible, and clear rules that may not be broken in doing so. If it wins more often than a competitor it is better, and it doesn’t matter at all how. The ‘is’ aligns with the ‘ought’.

In a much more complex situation involving human behaviour instead of simple fixed rules, how do we determine whether it is learning ‘good’ solutions or the whole gamut of behaviour, warts and all?

Human assessment of criminal cases has a problem with racial profiling. It is hard to clarify the difference between increasing the attention given to some suspects because they are more likely to have done it and increasing it because of their colour. Nobody queries looking more closely at person X if he or she is more likely to be guilty, but plenty question doing it on the grounds of colour. What if they are the same?

In some places the sad reality is that a robbery with violence is most likely to have been committed by a young male of colour. Yet assuming that as a first approximation is called racial profiling. If the AI learns from the real world, how will it not learn to profile suspects by colour in some localities? If we don’t understand its inner workings, how will we stop it?

Consider serial killers. The great majority are male. Is it sexist to give more likelihood to males than females? In sorting through suspects should an AI take gender into account or not? If gender profiling is a real issue to be avoided the AI ought not.

Regardless of the human issues in sorting out the correct rules for targeting suspects, it’s even harder to determine whether an AI is following them if we don’t know how it works.

3 Likes

I don’t know if there’s any intelligence here - artificial or otherwise.

3 Likes

Another article on bias in artificial intelligence:


Probably not what was intended by the OP, but:

Artificial? Sort of.
Intelligent? :thinking:

2 Likes

It’s that “all but” that’s the worry.

1 Like

What could go wrong?

https://www.bbc.com/news/av/stories-44614512/when-the-us-shot-down-an-iranian-airliner

2 Likes

Fascinating propaganda video. The BBC should be ashamed that it is hosting such garbage.

Of course, the US apologised and so everything was okay, right? Just as always happens in such cases, right? No harm, no foul? Sort of like the Gulf of Tonkin - oops, our bad. Sorry guys.

1 Like

They felt bad. That’s OK then. Robots will fix all that. They won’t feel bad.

Not sure there’s much in the way of intelligence in that incident - artificial or otherwise.

3 Likes

Not quite on-topic, but relevant:

4 Likes

Actually, I do not trust any spreadsheet I have not personally developed, and for good reason. Spreadsheets are used to support major decisions by businesses, governments and individuals, but are incredibly easy to get wrong. In fact, spreadsheet software makers often advise that a spreadsheet should not be exclusively relied upon or used as a ‘document of record’, given the fallibilities inherent in them.

I have used and do use spreadsheets regularly, and almost invariably find errors (including, occasionally, in my own). People tend not to reconcile the results back to the inputs in any meaningful way, in the same way that the calculator researchers found people simply ‘relied on the tool’. Calculators are generally quite trustworthy; other business tools can be orders of magnitude more complex, and if you do not check your inputs and logic you can end up with answers that are on their face nonsensical.
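A minimal Python sketch of the kind of reconciliation people skip: cross-footing a table so the grand total computed from row totals must agree with the grand total computed from column totals. The figures are invented purely for illustration.

```python
# Cross-footing: sum the table two independent ways and insist the
# results agree before trusting any derived figure.
rows = [
    [120.0, 30.5, 49.5],   # hypothetical monthly figures
    [200.0, 10.0, 90.0],
]

row_totals = [sum(r) for r in rows]        # totals across each row
col_totals = [sum(c) for c in zip(*rows)]  # totals down each column

# The two grand totals must match; a typo in any cell usually breaks this.
assert abs(sum(row_totals) - sum(col_totals)) < 1e-9, "table does not reconcile"
print(sum(row_totals))  # 500.0
```

The same idea works in a spreadsheet itself: keep an independent check cell that totals the other axis and flags any mismatch.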

4 Likes

There has always been an issue with floating-point calculations in computers, and it still exists today.

Intel suffered a major FPU problem (the FDIV bug) in early Pentium CPUs.

For a much, much longer read on floating-point arithmetic, see:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

So they all “lie”; they just try to make the “lie” as small as possible.
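A short Python sketch makes the “lie” concrete; any language using IEEE 754 doubles behaves the same way:

```python
# Binary floating point cannot represent 0.1 exactly, so every stored
# value is a best-effort approximation: a small, well-controlled "lie".
import math
from decimal import Decimal

a = 0.1 + 0.2
print(a)          # 0.30000000000000004, not 0.3
print(a == 0.3)   # False

# The exact binary value actually stored for the literal 0.1:
print(Decimal(0.1))

# The usual workaround: compare with a tolerance, never with equality.
print(math.isclose(a, 0.3))  # True
```

This is working as designed, not a bug like the FDIV flaw: the format simply cannot hold most decimal fractions exactly, which is why equality tests on computed floats are a classic spreadsheet and programming trap.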

3 Likes

I don’t dare comment. :neutral_face:

Indeed, spreadsheets are often used inappropriately and left unaudited, despite often being the basis of major decisions. I have seen huge systems that ‘grew like Topsy’ out of multiple linked spreadsheets. They were slow and buggy, had limited or no data validation, and were so complex that they were not well understood (and consequently hard to maintain) and almost impossible to audit.

My advice was to build a proper database from the ground up to replace the monster. Sometimes they said to go ahead, but sometimes it was “we can’t afford it”. My reply was “you can’t afford not to”. In some cases they were oblivious to data integrity issues but whined that some reports took hours to calculate; I did the rebuild and it took seconds. Some came round when the lack of integrity was shown to them, or when it bit them on the arse in a very embarrassing way; others didn’t, because “it won’t happen to us”, and I bailed.

The worst offenders were accountants for their blind love of spreadsheets and engineers for their devout faith that they knew better because they were engineers and engineers know these things that’s why we are engineers cause we know and we are practical people, and we do know or we can work it out, always. Except when it goes wrong, who can anticipate these things really, just a freak of nature, it should have been OK. {shrug}

Note that this is not about low-level design or execution failure, like errors in floating-point processors, but about high-level software failure brought on by using the wrong tools and the wrong approach. Which brings us back to AI: how do you validate what the AI says if you have no idea how it got the answer?

3 Likes