Franken-algorithms: the deadly consequences of unpredictable code


Growing concerns for consumers. When computer error gives a whole new meaning to “Blue screen of death”.

4 Likes

Thanks David, one of the best articles I’ve read on the algorithms topic.
I particularly liked:
“If computers appear to be performing magic, it’s because they are fast, not intelligent.” (I’ll have to use that one) and that there should be “algorithmic audits of any systems directly affecting the public”. That, I believe, is paramount.

2 Likes

It sure does. My mind jumped to another common phrase. Developers are usually constrained from ‘live testing’, that is, using a production system to test their code. The reason is that a failure may be extremely harmful to the organisation, in the end costing large sums of money. In the example of the self-driving Volvo that ran down the cyclist, they seemed to be doing just that. The result was death. The usual way around this is to test on a copy of the production system, with simulated users and no real cash at stake. How do you do that effectively with a car on a road system?

I thought the author got confused, saying on the one hand that audits should be done but then giving reasons why code audits may not find problems within a reasonable time, or at all. What form would such an audit take? If the complexity prevents you from peeking inside to predict what happens next, you have to do a black-box validation. That is, you systematically go through combinations of inputs and verify that the expected output takes place, without knowing what happens inside. Isn’t that what the Volvo was doing?
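
As a rough illustration of what that black-box validation might look like: sweep combinations of inputs and check each observed output against a safety property, never looking inside. Everything here is invented - a toy decide() controller standing in for the real, opaque system - but it shows how a systematic sweep can surface a violation.

```python
# Toy black-box validation: sweep input combinations and check the output
# against a safety property, without reading the system's internals.
import itertools

def decide(speed_kmh: float, obstacle_m: float, wet_road: bool) -> str:
    """Hypothetical system under test (pretend we can't see inside)."""
    stopping_m = (speed_kmh / 10) ** 2 * (1.5 if wet_road else 1.0)
    return "BRAKE" if obstacle_m <= stopping_m else "CRUISE"

speeds = [20, 50, 80, 110]      # km/h
obstacles = [5, 20, 60, 150]    # metres to obstacle
surfaces = [False, True]        # dry / wet road

failures = []
for speed, obstacle, wet in itertools.product(speeds, obstacles, surfaces):
    # Safety property: anything closer than 10 m must always trigger braking.
    if obstacle < 10 and decide(speed, obstacle, wet) != "BRAKE":
        failures.append((speed, obstacle, wet))

total = len(speeds) * len(obstacles) * len(surfaces)
print(f"{len(failures)} property violation(s) in {total} combinations")
# Finds one: at 20 km/h on a dry road the toy controller cruises past a
# 5 m obstacle.
```

The catch, of course, is that for a real car the input space is effectively infinite, which is exactly the author’s point about audits.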

Aside from the huge complexity of the code producing uncertain outcomes and doubtful verifiability, there is another factor that was hinted at in the stock market trading example but not named: non-linearity. Your system doesn’t have to be very complex before, if the mathematics are suitable, chaotic or unpredictable behaviour sets in. Given that stock markets are already unstable systems that tend towards self-destruction via the boom-bust cycle, the idea of algorithms trading huge amounts of money at lightning speed fills me with dread.
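
On non-linearity, the textbook demonstration is the logistic map - one line of mathematics, yet fully chaotic. This isn’t from the article, just a standard illustration of how quickly tiny input differences blow up:

```python
# Logistic map x -> r*x*(1-x). For r = 4 it is chaotic: two trajectories
# starting one part in a billion apart diverge completely within ~30 steps.
r = 4.0
x_a, x_b = 0.400000000, 0.400000001   # initial gap of 1e-9

for step in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: {x_a:.6f} vs {x_b:.6f}  (gap {abs(x_a - x_b):.2e})")
```

If a one-line formula can do that, a trading algorithm wired into an already unstable market needs no help.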

2 Likes

This was an Uber-controlled test vehicle which happened to be a Volvo, not a Volvo company self-driving vehicle.

My understanding is that Volvo is taking a different approach to most manufacturers, carrying out LIDAR mapping of motorways and major roads where self-driving will be possible. Listening to one of the Volvo engineers on late-night radio, they believe that full autonomy is a long way off, as it is very difficult to code for all road conditions and driver scenarios.

3 Likes

Thanks for the clarification. I don’t think it makes any real difference who was responsible.

4 Likes

My lawyer disagrees. :wink:

3 Likes

Had me at the tag-line. For the longest time, I had the impression we never fully understood the universe, nay, never even really came close - and probably never would … Now it seems we ‘did’ - but no longer do, thanks to ‘code piled on code’ … who knew?

3 Likes

Given an infinite amount of time, even a human brain will still get many things wrong! Machines only work reliably in a predictable world.

Put automation in a world subject to variability, uncertainty or unpredictable events and it will fail. As do humans! Of course, humans also fail in more predictable circumstances; self-drive vehicles should do better in those.

How would a self-drive vehicle respond to an errant bicycle rider suddenly falling across its path? How much room does the vehicle need to give a bicycle to allow for any errant move, and how much must it slow to have time to respond? With an unexpected event there will always be scope for a legal argument that it was the machine automation at fault, due to poor programming or sensor failure. It might be counter-argued that the bicycle veered unpredictably in front of the vehicle. Really? Nothing has changed - except, as the non-driving owner of the vehicle, are you off the hook here regardless?
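
To put some (entirely assumed) numbers on the “how much must it slow” question - say 1.5 seconds of sensing-and-decision latency before braking begins, and ordinary dry-road friction:

```python
# Rough stopping-distance arithmetic. The latency and friction figures are
# assumptions for illustration, not measurements from any real vehicle.
G = 9.81          # gravity, m/s^2
REACTION_S = 1.5  # assumed sensor + decision latency before braking
MU = 0.7          # assumed tyre-road friction on dry bitumen

for speed_kmh in (40, 60, 80):
    v = speed_kmh / 3.6                # convert to m/s
    reaction_m = v * REACTION_S        # distance covered before braking starts
    braking_m = v ** 2 / (2 * MU * G)  # braking distance: d = v^2 / (2*mu*g)
    print(f"{speed_kmh} km/h: about {reaction_m + braking_m:.0f} m to stop, "
          f"{reaction_m:.0f} m of it pure latency")
```

At 60 km/h that is roughly 45 m, more than half of it travelled before the brakes even engage - not much margin against a rider falling a few metres ahead.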

As a consumer, is it possible that laws will be updated with the assistance of the manufacturers to make it next to impossible to hold automation to account?

Perhaps more likely is a future in which severe restrictions are applied to self-drive or autonomous vehicles, except where they operate in restricted pathways reserved only for autonomous vehicles. No room there for driving yourself, or for bicycles or motor bikes.

In that instance it will be automation coder vs automation coder, unless government restricts all vehicles to the one supplier - a bit like a Windows-only or Mac-only world. It won’t fix the issue of errors in code though, as we know all too well from examples of these two code giants. Sorry, I left out Intel and its secret stash of hardware snafus.

1 Like

If you are sheeting home responsibility for a specific incident, then yes. If you are talking about the broader problems, in principle, of controlling the outcomes from complex code that may affect human welfare - which was the topic of the original article - then no.

There is always the defence that the product was not used in accordance with the intended purpose and/or instructions. Of course it matters!

We need someone or something to blame; it’s in the genes, isn’t it?

My first thoughts on seeing the original post were:

  1. BSOD + self-drive car = Blue Bumper Of Death?
  2. Given how we can’t even always reliably know the outcomes of the piled-on layers of logic we DO write, wait until decently complex neural networks enter the mix - even their developers often don’t really know how they arrive at their answers.
1 Like

In your posted context it was between Volvo and Uber, not between the misguided computer scientists, coders, and so on. You quoted @phb as the premise of your comment - “This was an Uber-controlled test vehicle which happened to be a Volvo, not a Volvo company self-driving vehicle” - and drew your conclusion from there.

You perhaps need to tighten up your debate skills, or at least not take everything so seriously, or in this case, work on recognising comebacks to unintended straight lines when you read them.

On the other hand:

:wink:

1 Like

Also leaves more room for all of us to ride our bicycles or horses?

And solves the AI problem at the same time.

Some dude, CEO, claims … yeah, another idiot. Seriously, there are only two things sadder than these clowns: the people who turn it into news, and the people who drink said news up as fact.

2 Likes

“drink” or “think”? Either way.

“B***er” - see what they are saying about the unintended consequences of unpredictable code. The effects are everywhere.
I was just about to dust off the bike and have now put it back in the shed. My neighbour is busy with the horse and a saddle. I don’t know how I will break the news to her.

I don’t see that I said anything at all about Volvo vs Uber; if you thought so, let me be clear that was not my intention. My focus was on the topic of the article: the perils of the algorithm grown beyond human control or understanding. I used Volvo as a label for the vehicle concerned, and it has now been pointed out that Volvo Pty Ltd as an entity was not responsible - fine. I could just as easily have said “the car”; the fact that it happened to be a Volvo is not important.

I’m sorry if I didn’t get your joke but that happens, this is an imperfect medium.

Where is the commentary about software testing, chaos and non-linearity, and putting robots in charge of billions?

There is a research direction in AI that offers far better hope than the current “unexplainable” decision making. This newer approach works much more like a human: it learns much as we do, and once it has, it can explain why it made a decision, much as we can discuss why we made ours.

Currently, AI in the classic sense is either rules-based programming, i.e. “if red then stop” (of course it will have lots of branches, but each is a yes/no answer), or it works from a very large programmed dataset. The larger the conditional branching or the larger the dataset, the better the AI seems to get at the correct answer, but neither of these is “learning” in the true sense, nor can the machine be easily interrogated about how it arrived at a decision. In these more classic approaches, if you want better decisions you either program more rules or you create much larger datasets, and each requires humans inputting that new rule or data, plus much greater computing power to achieve results in a reasonable time.
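
A loose sketch of that classic rules-based style (not any vendor’s actual code): every behaviour is a hand-written branch, so each answer can be traced to a named rule, but every improvement means a human writing another rule.

```python
# Rules-based "AI" in the classic sense: a hand-written tree of yes/no
# branches. Transparent and interrogable, but it never learns anything.
def classic_controller(light: str, pedestrian_ahead: bool, speed_kmh: float) -> str:
    if light == "red":
        return "stop"            # rule 1: red means stop, always
    if pedestrian_ahead:
        return "brake"           # rule 2: people beat everything else
    if light == "amber" and speed_kmh < 50:
        return "stop"            # rule 3: slow enough to pull up safely
    return "proceed"             # default when no rule fires

print(classic_controller("amber", False, 40))  # -> "stop", traceable to rule 3
```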

The newer research path is looking at something much more like our own neural networks, where new pathways are made as more data streams in (much like how we learn colours, language, how to walk, new ways of doing things). While this still requires large computing power, the “rules” evolve without “set” input from programmers. This is going to take a lot more research, and that also means time, to achieve any result that may be usable.
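
Very loosely, the streaming idea looks something like this - a single artificial “neuron” whose weights drift with every new example, no programmer writing an explicit rule at any point. The data and the hidden rule here are invented for illustration:

```python
# Toy online learner (a perceptron): weights adjust example by example as
# data streams in, so the "rule" emerges rather than being programmed.
import random

random.seed(1)
weights = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Examples arrive one at a time; the hidden rule being learned is
# "label = 1 when x0 + x1 > 1", but the learner is never told that.
for _ in range(2000):
    x = [random.random(), random.random()]
    label = 1 if x[0] + x[1] > 1 else 0
    error = label - predict(x)
    weights[0] += LEARNING_RATE * error * x[0]
    weights[1] += LEARNING_RATE * error * x[1]
    bias += LEARNING_RATE * error

print("learned weights:", [round(w, 2) for w in weights], "bias:", round(bias, 2))
print(predict([0.9, 0.9]), predict([0.1, 0.1]))  # should print 1 then 0
```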

I am not discussing “Deep Learning” here, as that is still based on very large labelled datasets and a mathematical weighting to achieve answers. Because of that weighting system, decisions can only be interpreted in terms of that maths (and the biases of that weighting), and as such cannot be explained. In an article on Wired’s site that looked at Deep Learning, this was explained as: “they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases”.

So for the near future AI is not going to be a great answer, but it can be a good answer in lots of situations, e.g. Alexa, Siri, or facial recognition to unlock your phone. For really critical decisions, the answer is not here yet.

3 Likes

AI and algorithms are very different… we are nowhere near the former yet, but the latter is growing exponentially.

The other thing I found interesting about that article was a little comment near the beginning that, I think, is a key part of this discussion:
“Barred from taking evasive action on its own, the computer abruptly handed control back to its human master…”

This is all part of that argument… should the car protect the driver or the pedestrian? For most of us the moral compass demands we protect others, but… yes, the but… what if I have kids in my car, etc. In the case of the driver in the article, the car was “banned” from making the decision; the driver made the decision to allow the algorithm to control the vehicle, but when reality hit, the vehicle followed its programming and handed control back. Who’s to blame? Volvo? Uber? The driver? The pedestrian? The law? Etc.

I predict that a lot of lawyers are going to make a hell of a lot of money out of all of this :expressionless:

2 Likes

A sudden handback of control to the driver adds one more risk, or delay, while the driver assesses two or more concurrent inputs: What is the car asking me to do? And what is it I am supposed to see? Ah, now I see it - what should I do next? Of course, some of us may not be on the ball at that instant. Others may be confused at getting two simultaneous inputs, having seen the impending disaster in front and wondering why the car is also making a screeching alarm. Frozen fear and shock is a common reaction when some of us are overwhelmed.

Should have done law?

2 Likes