The German Ministry of Transport was one of the first, if not the first, to publish a report on the ethics of autonomous cars. It's particularly interesting to see it contradicting itself from one paragraph to the next.
"The protection of individuals takes precedence over all other utilitarian considerations."
So abandon all motor transport then.
"a balance is struck between maximum personal freedom of choice in a general regime of development and the freedom of others and their safety"
Oh dear, that didn't last long, did it.
"the technology must be designed in such a way that critical situations do not arise in the first place. These include dilemma situations, in other words a situation in which an automated vehicle has to “decide” which of two evils"
Good luck with that one.
"preventing hazards by means of “intelligent” road infrastructure – should be used"
It's not used for human drivers.
"the protection of human life enjoys top priority in a balancing of legally protected interests"
Oh look, that one's back again. Presumably this balance doesn't involve anything such as a moral dilemma?
Wait a minute though:
"Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation, incorporating “unpredictable” behaviour by parties affected. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable."
So that's clear then. Everyone got that?
"any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited"
Women & children not first.
"General programming to reduce the number of personal injuries may be justifiable. Those parties involved in the generation of mobility risks must not sacrifice non-involved parties."
So what happens when the parties involved in generating the risk outnumber those non-involved?
"the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions"
...and not the party responsible for the decision to buy a car, and travel?
"Liability for damage caused by activated automated driving systems is governed by the same principles as in other product liability"
What about liability for the decision to activate the automated driving system?...or not:
"Is there an ethical obligation on the driver not to drive himself if this contributes towards enhancing safety?"
Data protection:
"Learning systems that are self-learning in vehicle operation and their connection to central scenario databases may be ethically allowed if, and to the extent that, they generate safety gains"
but:
"It is the vehicle keepers and vehicle users who decide whether their vehicle data that are generated are to be forwarded and used"
So how do you operate a self-learning central database if the car owner has a veto over what data it can collect?
"The driver of a car is driving along a road on a hillside. The highly automated car detects several children playing on the road. The driver of a manual vehicle would now have the choice of taking his own life by driving over the cliff or risking the death of the children by heading towards the children playing in the road environment. In the case of a highly automated car, the programmer or the self-learning machine would have to decide what should be done in this situation.
"The problem associated with the decision to be taken by the programmer is that he might take the „correct“ ethical decision for the human in conformity with the basic consensus but this decision remains an external decision which, moreover, does not intuitively capture a specific situation (with all the benefits and drawbacks of intuitive/situational behavioural control) but has to appraise a situation in abstract/general terms. In the case of an intuitive decision, the individual (in this case the driver) will either accept the risk of his own death or not.
"Ultimately, therefore, the programmer or machine would, in extremis, be able to take correct ethical decisions on the demise of the individual human being. Taken to its logical conclusion, humans would, in existential life-or-death situations, no longer be autonomous but heteronomous.
"This conclusion is problematic in many respects. On the one hand, there is the danger of the state acting in a very paternalistic manner and prescribing a „correct“ ethical course of action (to the extent that the programming prescribes this). On the other hand, this would be antithetical to the value system of humanism, in which the individual is at the centre of all considerations. A development of this nature thus has to be viewed critically."
Bear in mind that moral dilemmas have already been pre-programmed out, and the age of potential victims is not allowed to matter.
"What is problematical about dilemma situations is that they involve decisions that have to be taken from out of a specific individual case and considering various factors"
No sh!t Sherlock.
"It is not possible to systematically comply with the premise of minimizing personal injury unless an assessment of the impact of damage to property is attempted and possible resultant personal injury is factored into the behaviour in dilemma situations"
So it's not simple after all.
"As long as the prior programming minimizes the risks to everyone in the same manner, it was also in the interests of those sacrificed before they were identifiable as such in a specific situation"
Some are identifiable as more vulnerable than others before the programming takes place.
"the Ethics Commission refuses to infer from this that the lives of humans can be „offset“ against those of other humans in emergency situations so that it could be permissible to sacrifice one person in order to save several others"
"it would appear reasonable to demand that the course of action to be chosen is that which costs as few human lives as possible"
"Here, the Commission has not yet been able to bring its discussions to a satisfactory end, nor has it been able to reach a consensus"
"It would not be compatible with this guiding principle if we were to impose on an individual, who is established in advance in his role of driver or user of a motor vehicle, obligations of solidarity with others in emergencies, including sacrificing his own life."
" those involved in mobility risks must not sacrifice those who are not involved"
"There is no ethical rule that always places safety before freedom"
"Accountability for driverless systems that are being used for their intended purposes lies with the manufacturer and operator"
What if the manufacturer says "The auto must only be used when the conditions are suitable"?
I think one of the main effects of autonomous cars might be to show the human race what a mucking fuddle they get into when they try to explicitly define ethical values. People can and do live with contradictory values and beliefs, but if computer software is programmed with self-contradictory instructions it'll crash (along with the car that it's controlling).
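To make that concrete, here's a minimal sketch (entirely hypothetical, not from the report) of what happens when two of the report's rules are encoded as hard constraints. Rule A says individual protection takes precedence over everything, so no action that puts anyone at risk is permitted; Rule B says the car must choose the action that costs as few lives as possible. In an ordinary situation the rules agree; in a dilemma, no action satisfies Rule A, yet Rule B still demands a choice, and the software has no consistent instruction left to follow:

```python
def permitted(action):
    """Rule A: an action is permitted only if it puts no one at risk."""
    return action["at_risk"] == 0

def choose_action(actions):
    """Rule B: choose the permitted action with the fewest lives at risk."""
    candidates = [a for a in actions if permitted(a)]
    if not candidates:
        # The dilemma case: every available action violates Rule A,
        # but Rule B still demands that one be chosen. The rule set
        # is self-contradictory and the program can only give up.
        raise RuntimeError("no action satisfies the constraints")
    return min(candidates, key=lambda a: a["at_risk"])

# An ordinary situation: braking to a stop risks no one, so the rules agree.
routine = [{"name": "stop", "at_risk": 0}, {"name": "swerve", "at_risk": 1}]

# A dilemma: swerve (1 at risk) or continue (3 at risk). Neither risks zero,
# so choose_action raises instead of deciding.
dilemma = [{"name": "swerve", "at_risk": 1}, {"name": "continue", "at_risk": 3}]
```

A human driver in the dilemma case would simply do *something*; the program, given these rules, does nothing at all, which is the point.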
If I had a car that did that to me I'd be asking for my money back.

A friend recently bought a Honda Jazz. Every time she starts the engine she has to remember to turn off the automated 'safety' assistance that tries to take over the driving and plough her into hedges (rural area). They have had a few near misses. It cannot be adjusted in the software.