Franken-algorithms: the deadly consequences of unpredictable code

Probably not. Autonomous vehicles change the cost-benefit equation. Owning a vehicle makes less economic sense, but travel becomes less costly. People will be able to afford to travel more, so there’ll probably be more vehicles on the roads.

Quite a few, actually. It’s almost accepted as a given.

A driving ban may sound like a bonkers idea, but it has gained momentum both inside and outside the auto industry in recent years.

When industry, both manufacturing and scientific, talks about AI, algorithms are treated as part of the AI make-up. What you perhaps see as AI may differ from how they discuss it. For many of them, AI is where a machine makes decisions somewhat autonomously of immediate human input. For others, AI means complete autonomy from “birth”, and still others see it as a mix of both complete and partial autonomous decision-making. Even when we make decisions we run a risk assessment, and in pure terms that could be a maths equation, just not on a level we comprehend as such. It is one we make using our experience and knowledge to weigh possible outcomes, but weigh we still do.

2 Likes

The driver was there for a reason. According to early reports, she neglected her duties.

Police say the woman was streaming The Voice for 43 minutes before the crash.

Video taken from inside the vehicle shows Ms Vasquez looked down 204 times during the 1.6-kilometre journey, and only looked up about half a second before the Volvo, travelling at 70 kilometres an hour, hit Ms Herzberg.

… in their report, police said if she had been paying attention she could have reacted 43 metres before impact and brought the SUV to a stop about 12 metres before hitting Ms Herzberg.

Of course, much time has passed since then. I’ve no doubt that the reality is much less clear.

1 Like

A little clarity:

1 Like

Testing Frankencode has always interested me because absolute testing is essentially impossible. In the 1970s ‘my work PC’ was a Control Data 175, a supercomputer of the day, and I was taking courses at night. The text was Dijkstra’s classic A Discipline of Programming. It was and remains a book from hell with the best messages ever, messages that have rarely been taken to heart under the pressures of get-it-done-and-release-it development standards.

Basically and simplistically: every algorithm and function should have one entry, one exit, and be mathematically derived as well as mathematically or logically verifiable. Not quite so simple, but mostly that simple. During one memorable class the professor put a short loop on the board and asked the class (mostly professionals, all masters level) how the loop exited. Nobody got it. He explained it. We were embarrassed. I have long forgotten the detail but retained the message.
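I can’t reproduce the professor’s actual loop, but here’s a hypothetical Python sketch of the same trap: a loop whose exit test looks perfectly reasonable yet never fires, because the counter steps over the target value.

```python
def steps_to_exit(start, step, target, limit=1_000):
    """Count iterations until i == target; give up after `limit` passes.

    The `limit` guard exists only so this demonstration terminates;
    the 'real' loop on the board would have been a bare `while i != target`.
    """
    i, count = start, 0
    while i != target and count < limit:
        i += step
        count += 1
    return count if i == target else None  # None == 'never exits'

steps_to_exit(0, 2, 10)  # 5: the exit condition eventually fires
steps_to_exit(0, 3, 10)  # None: i jumps 9 -> 12, the test never fires
```

With `!=` as the exit test the second loop spins forever; a `>=` test would have exited either way. That is exactly the kind of detail a formal derivation of the loop forces you to confront.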

A reality check at the time: it was shown that to authoritatively test a basic multiplication of any two numbers in the CDC 175 CPU would require many months of 24x7 running, and that was before any other operation or a program was introduced.
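The arithmetic still holds up today. A back-of-envelope sketch (the 48-bit operand width and the throughput figure are my assumptions for illustration, not numbers from the original study): even fixing one operand and sweeping the other through every 48-bit value takes the better part of a year.

```python
# Hypothetical back-of-envelope for exhaustively testing one operand of
# a hardware multiply. 48-bit operand (CDC-style mantissa width) and
# 10 million checked multiplies/second are assumed figures.
operand_values = 2 ** 48          # every value of one 48-bit operand
rate = 10_000_000                 # assumed multiplies verified per second
seconds = operand_values / rate
days = seconds / 86_400
print(f"{days:.0f} days")         # roughly 326 days -- for ONE fixed
                                  # second operand, ONE instruction
```

Multiply that by every possible second operand and every other instruction, and “authoritative” testing stops being an engineering problem and becomes a cosmological one.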

Hence what to test and how to test it becomes a field in its own right, and some get it much better than others, but none of it will be or can be authoritative in an absolute sense as a practical matter.

Efforts have commenced in the last decade or so to develop tools for formal verification that hardware and software are correct and bug-free. This is complicated, since even a 100 line BASIC program has plenty of opportunity to misbehave. When you have operating systems and web browsers with millions of lines of code, it becomes extremely difficult to review all the potential ‘break-points’ - when ‘break’ can mean anything from ‘it stops’ to ‘a clever hacker can use x feature, y feature and z feature together to create a means of subverting the code’ … when x, y and z can be scattered throughout a program, multiple programs and/or the hardware on which they run.

One (open source) project in this area is being run by Microsoft.

https://www.microsoft.com/en-us/research/blog/project-everest-advancing-the-science-of-program-proof/

There was a classic operating systems textbook from the 1970s that included a story about the author writing a program to grade student programs. His program could report that the student program worked or that it did not work. One year his program reported ‘maybe’ for a single student program. His comment was that he never figured out how the student did it, but the student had to get an ‘A’ as a result. In retrospect the student may have been the first successful hacker. :wink:

Anyway, the quest for the holy grail of perfect programs is as old as the first bug found in computing. The fallacy of achieving it is that the checker would itself be software, requiring a recursive ability to find and call out bugs, including its own. Might happen someday, but probably not in a time frame meaningful to anyone reading this today.

2 Likes

I hear they are still working on a perpetual motion machine - and they are only two laws of thermodynamics away from success!

Back in the ’70s when I was eyeing off a career, an allegedly ‘wise’ guru in the industry told me not to bother: within the next 10 years computers would be programming themselves and the need for programmers would be minimal, if any. I thought he was joking and laughed - wrong reaction - he genuinely believed it. While in the dim past I have written code that writes code, in some fairly simple situations where we just needed such a tool in the dev process, there are much fancier and more advanced examples of this now, and buzzwords like AI, machine learning and deep learning might eventually get us closer - but ultimately I think it will be a joint press release, when the people spruiking this think they have arrived, with the people who made the perpetual motion machine :wink:
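For what it’s worth, the simple end of “code that writes code” is still easy to show. A minimal, hypothetical Python sketch (the function and field names are invented): build a function’s source as text, compile it with `exec`, and hand the result back as an ordinary callable.

```python
# Minimal, hypothetical sketch of code generating code: build source
# text for a trivial accessor function, compile it, and return it.
def make_getter(field):
    src = f"def get_{field}(record):\n    return record[{field!r}]\n"
    namespace = {}
    exec(src, namespace)          # compile and run the generated source
    return namespace[f"get_{field}"]

get_name = make_getter("name")
get_name({"name": "Ada"})         # returns 'Ada'
```

This is the dev-tool end of the spectrum I was talking about: mechanical, template-driven generation. The gap between this and a machine that genuinely designs programs is the part the guru waved away.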

3 Likes