The ethics of autonomous vehicles

OP
Ming the Merciless

There is no mercy
Location
Inside my skull
The scenarios I was asked to judge were no-win scenarios: one group or the other would die.

I did not assign much importance to saving the passengers.
I had no gender bias.
The most killed was an elderly chap with a walking stick (sorry).
I killed criminals instead of workers, but how a self-driving car would know this, I know not.

It was interesting that they did not consider that some of the scenarios could be engineered out if they wished to protect vulnerable road users. I took the view that the person who introduced the danger should be sacrificed if that danger was realised. Every time that option was available, I took it.
 

mjr

Comfy armchair to one person & a plank to the next
They can pretty much all be avoided simply by slowing vehicles down dramatically.
Or have the car self-destruct if it does still manage to find its way into a no-win situation.
 

winjim

Smash the cistern
Interesting that I had a 100% bias towards saving females and a 100% bias towards saving people of high social value, when the set of rules I was using was very simple* and didn't take gender or social value into account.

So the test is bunk.

*Save humans before animals. Save pedestrians before passengers. Maintain course.
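For what it's worth, those three rules can be sketched as a tiny decision function. Everything here is hypothetical (the function name, the outcome format, the counts) and exists only to show how little information the rules actually consult, which is winjim's point: gender and social value never enter into it.

```python
def choose_outcome(straight, swerve):
    """Pick between two no-win outcomes using winjim's three rules.

    Each outcome is a dict with hypothetical keys:
      'humans'  - number of humans killed in that outcome
      'group'   - who dies: 'pedestrians' or 'passengers'
    """
    # Rule 1: save humans before animals -> prefer the outcome
    # that kills fewer humans.
    if straight["humans"] != swerve["humans"]:
        return "straight" if straight["humans"] < swerve["humans"] else "swerve"
    # Rule 2: save pedestrians before passengers -> prefer the
    # outcome in which the passengers die.
    if straight["group"] != swerve["group"]:
        return "straight" if straight["group"] == "passengers" else "swerve"
    # Rule 3: otherwise, maintain course.
    return "straight"
```

Note that nothing in the inputs mentions age, gender, or occupation, so any bias the test reports along those axes is an artefact of which characters the scenarios happened to place where.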
 

hatler

Guru
I simply left the car on the course it was already taking, on the grounds that if it swerved, it would catch that set of people by surprise. This resulted in a reported extreme bias on a couple of my choices (upholding the law, age bias, and species preference).
 

lazybloke

Priest of the cult of Chris Rea
Location
Leafy Surrey
The scenarios are unrealistic, as is the limited choice of actions. How can this data provide anything useful for programming self-driving cars?
 

PK99

Legendary Member
Location
SW19
Ming the Merciless said:
"The scenarios I was asked to judge were no win scenarios. One group or the other would die.

It was interesting that they did not consider that some of the scenarios can be engineered out if they wish to protect vulnerable road users. I considered that the person who introduced the danger should be sacrificed if that danger was realised. Every time there was that option I took it."

lazybloke said:
"The scenarios are unrealistic, as are the limited choice of actions. How can this data provide anything useful for the programming of self-driving cars?"

You miss the point of trolleyology (no, I did not just make that word up; Google it). The intention is to explore YOUR moral choices in closely defined situations, removing the white noise of real-world choices.

Saved you the Google search:

https://www.psychologytoday.com/blog/is-america/201401/trolleyology
 