The Uber incident was actually a result of HUMAN ERROR. The car 'saw' the pedestrian via two different sensors BEFORE the human operator tried to take any kind of avoiding action. The reason the car didn't brake by itself was that Uber had DISABLED auto braking, in the belief that the human operator would be better at judging edge cases than the software. Sadly, human fallibility was once again shown up.
https://www.theguardian.com/technology/2018/may/24/emergency-brake-was-disabled-on-self-driving-uber-that-killed-woman
I work with computer algorithms all the time: algorithms don't make mistakes, ever, but humans do, all the time. Software 'crashes' because the code is written by humans, in human-readable languages designed so that we can understand it, which is by no means the most efficient way to code for a computer. Current AI neural nets program/code themselves; they are 'black boxes' where we (humans) have no idea how the algorithm has been written in order to achieve the required outcome. The realisation that AI neural networks generate better code without human input was one of the biggest steps forward in recent years (the AlphaGo Zero result linked below).

The next step is whether AI neural networks can 'think' of new patterns/pathways rather than just being superhuman at identifying patterns in historical data. At that point we really will be going into the unknown, and it's coming much quicker than people think.
https://www.nature.com/articles/nature24270
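To make the 'black box' point concrete, here's a minimal sketch in Python/NumPy (my own illustration, not anything from the linked paper): a tiny network learns XOR purely from examples. Nobody writes the XOR rule; after training, the 'program' lives in weight matrices that no human authored and that don't read like code at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR examples -- the only thing a human supplies.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the "code" the network writes for itself.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20_000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to shrink the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # close to [[0], [1], [1], [0]] -- it has "learned" XOR
print(W1)            # the learned "program": a grid of numbers, not readable rules
```

The behaviour is learned rather than written, and inspecting W1/W2 afterwards tells you almost nothing about WHY it works. That, scaled up millions of times, is the black box.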
WHEN, and it's a BIG WHEN, AI development is good enough to take over driving, I will have zero worries about trusting the code... Will you? That probably depends on how much Anime you watched back in the 1990s.