I think GC is wrong and wearing a tinfoil helmet. But I've got a massive hangover, so I can't be bothered to take his post apart. What you are describing is not, as GC points out, how we got to where we are now.
For my sins, I used to be a software developer in a previous career. There are two things you need to know about software development, and software developers.
The first is that they actually aren't much good at edge cases. And this is a problem, because cyclists are very much an edge case, especially to software developers in the US. Vulnerable road users may be a more familiar concept to the German software developers writing the code for autonomous vehicles. So we could conceivably find that the most courteously operated vehicles in our autonomous future are German, in a deliciously ironic reversal of the present perception.
But no autonomous vehicle can operate any better than its software. And if the developer decides that overtaking a cyclist at speed with 10 cm to spare is perfectly acceptable, that's exactly what will happen. It's even quite conceivable that the sensor returns from a cyclist or pedestrian will be discarded as noise by some digital signal processing algorithm. The bottom line is that the software for these things will be written by fallible humans, so that software will in turn be fallible. And complex systems fail in surprising ways...
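To put some flesh on that, here's a deliberately crude sketch of the sort of thing I mean. Everything in it - the threshold, the field names, the numbers - is invented for illustration; it's not from any real AV codebase, just the shape a naive clutter filter might take:

```python
# Hypothetical sketch: how a naive clutter filter could drop a cyclist.
# Threshold and detection values are made up for illustration.

RADAR_CLUTTER_THRESHOLD = 5.0  # minimum return strength kept (arbitrary units)

def filter_returns(returns):
    """Discard anything weaker than the clutter threshold.

    A car reflects strongly; a carbon-fibre bike with a lycra-clad
    rider may not. Tune the threshold against cars only, and the
    cyclist quietly disappears from the vehicle's world model.
    """
    return [r for r in returns if r["strength"] >= RADAR_CLUTTER_THRESHOLD]

detections = [
    {"object": "car ahead", "strength": 42.0},
    {"object": "cyclist on the left", "strength": 3.2},  # weak return
]

print(filter_returns(detections))
# -> only the car survives; the cyclist was filtered out as "noise"
```

Nobody sets out to write that bug. It falls out of tuning decisions made with the common case in mind - which is rather the point about edge cases.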
The second is that the path of least resistance is invariably the preferred choice. That is at least part of the reason why we spend so much of our time fighting recalcitrant applications - it is so much easier for developers to force users to adapt to the software than it is to learn how people actually want to use it.
Legislating vulnerable road users off the road is very much the path of least resistance: you can expect heavy lobbying from motoring interests for exactly that to happen - "for their own good", naturally. After all, it will be rather difficult to sell an autonomous vehicle that cedes priority to every lowly pedestrian or cyclist, won't it? We can make this prediction with some confidence because it is exactly what happened in the US with the passing of jaywalking laws. The motoring lobby is large and well funded - it is unwise to ignore it.
Way too sensible a post for here!
'Tricky moral question' my arse. There are higher level issues than software bugs (although based on my Qashqai there's some way to go). We're now getting into the bizarre world of machine ethics, where a vehicle may be presented with scenarios in which it would have to make what in human terms is a moral choice. Mercedes have popped in and out of the news with this story. I'm sure the reporting has been through several sensation-enhancing filters, but the core dilemma is still there. It's all very 'I, Robot'!
You do echo some of my concerns, but put them far better than I could hope to do.
I agree with you about the car-centric question in the linked article. If these things become commonplace, I'd like to see a law that said the source of the danger - the car and its occupants - bore the brunt of any consequences. Passing a law and then persuading the AI to follow it is another thing, though.
The properly tricky questions come when you consider choices that only have consequences for non-occupants. You can stage the scenarios for yourself, but how would the vehicle determine the 'best' outcome if each choice is likely to lead to at least some death or injury? The reference to Asimov is relevant because his Robot books deal with exactly this, even if they're dated technologically. Sooner or later we're going to need robot ethicists.
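And the ugly truth is that 'determining the best outcome' almost certainly means minimising some cost function. Here's a hypothetical sketch - the scenario, the weights and the probabilities are all invented, not from any real system - of how the 'moral choice' reduces to whatever numbers a developer typed in:

```python
# Hypothetical sketch: a 'moral choice' as a least-cost calculation.
# Weights and probabilities are invented for illustration only.

INJURY_COST = {"occupant": 10.0, "pedestrian": 1.0}  # who decided this ratio?

def expected_cost(outcome):
    """Sum expected harm: probability of injury times the weight
    assigned to each class of person."""
    return sum(p * INJURY_COST[who] for who, p in outcome.items())

# Each available manoeuvre maps person-class -> probability of injury.
manoeuvres = {
    "brake hard (risk to occupants)": {"occupant": 0.3, "pedestrian": 0.1},
    "swerve onto pavement":           {"occupant": 0.0, "pedestrian": 0.9},
}

choice = min(manoeuvres, key=lambda m: expected_cost(manoeuvres[m]))
print(choice)
# With occupant injuries weighted 10x, the car swerves at the pedestrian.
```

Change one number in that weights table and the car's 'ethics' change with it - and nobody outside the dev team need ever know. That's why the robot ethicists will be needed.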
Why are we discussing self-driving software on page 97 (!?!) of a thread about a collision that didn't even involve a car?
Can I suggest someone kick a new thread if they want to go into detail about this issue?