The errors were programmed in, and the government was warned about them at the time. Robodebt did not use AI; it simply compared two sets of data against a basic list of rules - and was of course unable to know that one of those rules (averaging income across the year) did not align with the legislation.
Robodebt was human ‘error’ in the same way that we see so many car ‘accidents’.
This is not a new problem, although it is becoming a bigger one as predictive algorithms spread. One example from a long history of poor prediction is in the field of medicine: developers of new drugs prefer to test them on men. Why men? Because they don't have hormonal cycles, and so are easier to monitor and collect data on. Of course, ignoring roughly half the population makes zero sense, but it apparently works for the drug companies' clinical trials. (To be clear, new drugs do get tested on women - just not as much as, or as early as, they are tested on men, male mice, etc.)
Just how much AI is actually needed for autonomous search and destroy? Likely very little, especially if the drones are mass produced and not particular as to whether the target is genuinely hostile. Every day, somewhere in the world, toy drones are turned out by the tens of thousands.
It might find certain scenarios rather difficult to analyse or predict. Not all politically driven decisions appear to be defined by logic. It may have more chance of predicting the lotto numbers than the seemingly irrational and random responses of some of the more notable politicians.
“Past performance may not be indicative of future results”. In other words, it is dangerous to bet on the future - be it the stock market, lottery numbers, or… ‘events’.
And, finding it impossible to do the job it's been set, it may well decide that the world has no need of such illogical creatures as humans, and wipe us all out. I refer you to the 2003 version of Battlestar Galactica.
P.S.
I’m more a fan of Red Dwarf and the somewhat flawed Holly. We all have different visions of how inhuman or human-like AI might behave.
One response offered by Holly on how AI might evolve? “Look, I’m trying to navigate at faster than the speed of light, which means that before you see something, you’ve already passed through it. Even with an IQ of 6000, it’s still brown-trousers time.”
I had to look that term up. Basically, people who believe that only they matter and that all forms of government (except, in some cases, ultra-local ones) are illegitimate. Randians. People who think they see a utopia, but whose ideas in practice would lead to the collapse of civilisation.
As for algorithms to identify faces and emotions, I am happy to see them fail. I do not want to live in a panopticon.