I found this video extremely interesting even though it is somewhat off-topic.
DQs:
1. Who is responsible for a death caused by technology? Is it premeditated murder because of a pre-set algorithm? If so, who is at fault?
2. Does a human's spontaneous reaction in the same scenario absolve them of whatever consequences follow?
Someone has to program the cars' ethical preferences. It'll be on us, not them, just as it is now. I for one will feel MUCH safer on I-24 when the (non-organic) machines are in control.
I would agree that the cars' ethics have to be programmed by humans. However, that does raise the problem of trying to program the trolley dilemma into a machine. I'm a little uneasy about self-driving cars anyway, though. I don't like the idea that my car could be hacked and controlled. Even my current vehicle has more electronics than I'm comfortable with. Maybe I'm just a control freak.
I agree, self-driving vehicles would make me extremely uncomfortable. With all of the uncertainties and unexpected events that could happen, I would not trust a self-driving vehicle to be able to adjust to every one of them.
Have you done a daily interstate commute lately?
Who is responsible for a death caused by technology? Is it premeditated murder because of a pre-set algorithm? If so, who is at fault?
I think if the programmers set a specific priority list of who to save in specific scenarios, then it would become unethical. Also, if some cars were programmed to save the driver while others were programmed to save as many people as possible, I think the price difference between them would grow large enough that the self-protective cars would become a privilege of the rich. Therefore, I think randomization in very specific scenarios would be the most ethical, and would remove the responsibility from the programmers.
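To make that last idea a little more concrete, here is a minimal sketch in Python of what "randomize instead of ranking lives" could look like. The function and maneuver names are made up for illustration and don't reflect any actual manufacturer's software; the only point is that the random pick happens solely among options that are already tied for least harm.

```python
import random

# Hypothetical sketch only: if one maneuver clearly harms fewer people, take it;
# when the remaining options are tied, choose at random rather than following a
# preset priority list of whose life counts more.
def choose_maneuver(options):
    """options: list of (label, expected_harm) pairs for each possible maneuver."""
    min_harm = min(harm for _, harm in options)
    tied = [label for label, harm in options if harm == min_harm]
    return random.choice(tied)  # randomness applies only to equally bad outcomes

# Example: two maneuvers with equal expected harm -> the pick is random,
# so no programmer has decided in advance who gets protected.
print(choose_maneuver([("swerve_left", 1), ("stay_in_lane", 1)]))
```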