Quite a lot to unpick here so I've warmed up the waffle iron. I'm with Drago on this one. As for the trolley problem, it's a red herring: if the vehicle is never put into a scenario where a collision is unavoidable, it's a non-problem.
Why would it not be in that position? It's not human. It has multiple inputs - visual, IR, etc. It can see further. It can use prediction modelling to work out what is going on. It has reaction times vastly faster than a human. It can talk to other vehicles which are also autonomous. The more cars you have talking to each other, the more information the system can have about danger vectors. The development scenario is for the car to take action *before* the scenario happens.
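To make that concrete, here's a toy sketch of the "act before the scenario happens" idea. Everything in it is invented for illustration (the Hazard type, plan_speed, the numbers) - a real AV stack is vastly more complicated - but the shape is the same: fuse what your own sensors see with what other vehicles broadcast, and start slowing long before a human would even perceive the problem.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    distance_m: float  # distance ahead of our vehicle
    speed_ms: float    # hazard's speed along our lane (0 = stationary)

def fuse_hazards(own_sensors, v2v_reports):
    """Combine what our own sensors see with what other vehicles broadcast;
    V2V reports can describe hazards beyond our own sensor horizon."""
    return own_sensors + v2v_reports

def plan_speed(current_speed_ms, hazards, max_decel_ms2=6.0, margin_s=2.0):
    """Return a target speed that keeps us able to stop short of every
    hazard, with a time margin no human driver could maintain."""
    target = current_speed_ms
    for h in hazards:
        closing = current_speed_ms - h.speed_ms
        if closing <= 0:
            continue  # not closing on this hazard
        ttc = h.distance_m / closing                  # time to collision
        needed = closing / max_decel_ms2 + margin_s   # braking time + margin
        if ttc < needed:
            # Start slowing *now*, long before the collision is unavoidable.
            target = min(target, h.speed_ms + max_decel_ms2 * (ttc - margin_s))
    return max(target, 0.0)

# Example: a stalled car 150 m ahead, reported over V2V before we can see it.
print(plan_speed(30.0, fuse_hazards([], [Hazard(150.0, 0.0)])))  # ~18 m/s
```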
The trolley problem is a binary choice. Autonomous AIs will never face a binary choice, and all AV development centres on the car learning to read the world around it using its enhanced senses. That's why it is still some time away. Tesla have the biggest archive of input data in the world from their vehicles. Elon Musk has stated that Tesla will have Level 5 capable vehicles by the end of next year. That's probably over-optimistic, but within five years? I wouldn't bet against it - and once Tesla has Level 5 autonomous vehicles, expect networks of quick-hire self-driving Tesla taxis to become commonplace.
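The same point in miniature: the trolley gives you two tracks, but a planner scores a whole continuum of braking and steering combinations and takes the cheapest. The cost function below is completely made up; what matters is that there are 441 candidate manoeuvres here, not 2.

```python
def manoeuvre_cost(brake, steer):
    """Hypothetical cost: braking cuts collision risk, steering can help
    dodge but destabilises the car; every weight here is invented."""
    collision_risk = max(0.0, 1.0 - brake) * (1.0 - 0.3 * abs(steer))
    instability = 0.5 * steer ** 2
    discomfort = 0.1 * brake
    return 10.0 * collision_risk + instability + discomfort

# Sample the continuum: 21 braking levels x 21 steering angles.
candidates = [(b / 20, -0.4 + s * 0.04) for b in range(21) for s in range(21)]
best = min(candidates, key=lambda c: manoeuvre_cost(*c))
print(f"{len(candidates)} candidates; best: brake={best[0]:.2f}, steer={best[1]:+.2f}")
```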
Your central argument seems to rest on the theory that everyone on the same road as AVs is a model citizen and rational actor and that the system is infallible. But the road network is not a closed loop, not everyone using it is a rational actor, and unless you make wearing beacons mandatory for every person and animal, there will still be unexpected elements that the system has to deal with on the fly.
I can think of several real-world no-win situations that an AV cannot prevent, but where the AV can alter the outcome by making a decision.
Here's just one; I can provide more if necessary.
You don't have to go far back to see a massive pileup on a motorway caused by people driving blindly into fog. Yes, an AV can see the obstructions through the fog and will react accordingly, but:
- what about the non-AV car behind that has no way of receiving data from the AV's beacon and fails to slow down? The AV will certainly be aware of it.
- what evasive manoeuvres will it take? If it's only going to be a minor collision, probably none.
- what if, in this scenario, it's not a car behind the AV but a truck that fails to slow down, so that braking to avoid the crash ahead would get the AV rear-ended hard enough to kill its occupants?
- what if the AV detects someone standing on the shoulder it could otherwise use as an escape route?
These are not hypothetical scenarios but real-world ones, and an AV has to resolve them through computation and analysis where a human would be making a split-second decision with imperfect information and comparatively terrible reaction times.
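To show how uncomfortable that computation gets, here's a deliberately crude sketch of the fog scenario as cost minimisation. Every option, probability and weight is invented, and I have no idea whether any production stack encodes anything this explicit - but *something* in the software amounts to this table, and someone had to choose the weights.

```python
OPTIONS = {
    # option: P(serious harm to occupants, bystander on shoulder, truck driver)
    "brake_in_lane":       (0.9, 0.0, 0.3),  # the truck ploughs into us
    "swerve_to_shoulder":  (0.1, 0.8, 0.0),  # the shoulder is where the pedestrian is
    "partial_brake_drift": (0.5, 0.2, 0.3),
}

WEIGHTS = (1.0, 1.0, 1.0)  # occupants, bystander, truck driver - who sets these?

def expected_cost(probs):
    return sum(w * p for w, p in zip(WEIGHTS, probs))

choice = min(OPTIONS, key=lambda o: expected_cost(OPTIONS[o]))
print(choice)  # equal weights pick "swerve_to_shoulder";
               # change WEIGHTS and the "right" answer changes with it
```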
Also, any machine learning system trained on real-world data will pick up the unconscious biases of the people who provided the training data. There's a lot of research into this; in particular, look up computer vision and racism.
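You can reproduce the mechanism in a few lines. The toy below tunes a "pedestrian detector" threshold on fabricated data dominated by one group, then tests it on both; the underrepresented group comes out with a markedly worse detection rate. Real studies are far subtler, but this is the basic failure mode.

```python
import random
random.seed(1)

def sample(group, n):
    # Hypothetical "detectability" score; group B scores systematically
    # lower under this imagined sensor/feature combination.
    mu = 1.0 if group == "A" else 0.4
    return [random.gauss(mu, 0.3) for _ in range(n)]

# Training set: 95% group A, 5% group B - like a fleet that mostly
# drives where group A lives.
train = sorted(sample("A", 950) + sample("B", 50))

# "Training": pick the threshold that detects 99% of the training set.
threshold = train[int(0.01 * len(train))]

def detection_rate(group):
    test = sample(group, 10_000)
    return sum(x >= threshold for x in test) / len(test)

print(f"group A: {detection_rate('A'):.1%}, group B: {detection_rate('B'):.1%}")
```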
Do you really want life-or-death decisions to be made by an AI trained on a data set drawn mainly from people who drive Teslas?
If I'm going to be killed on the road, I'd really rather it not be because a techbro with terrible ethics built the training methodology that taught an AI that, in a no-win situation, I was expendable.