Monday, January 7, 2013

Robot ethics

"what we really need is a sound way to teach our machines to be ethical. The trouble is that we have almost no idea how to do that. Many discussions start with three famous laws from Isaac Asimov: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. A robot must protect its own existence as long as such protection does not conflict with the first or second laws. The trouble with these seemingly sound laws is threefold. The first is technical: at least for now, we couldn’t program a machine with Asimov’s laws if we tried..."

Google’s Driver-less Car and Morality: The New Yorker

Here and Now 1.7.13
