Up@dawn 2.0

Wednesday, January 30, 2013

The Ethics of Conscious Existence

[by William Phillips] Glenn McGee reveals in his book, "Bioethics for Beginners", that the South Korean government has begun taking precautionary measures to institute ethical boundaries for synthetic life. The question we, in group 4, deliberated on was: at what point do these boundaries become legitimate? The answer primarily depends on the possibility of a singularity, which in turn rests on the evolution of synthetic life, specifically robotic and mechanical life, into conscious beings. We came to an agreement that intelligence does not beget anima, or consciousness, which implies a collective belief in the "soul" or "spirit." On the same note, since we had agreed that machine consciousness is implausible, we instead questioned the use and integration of synthetic beings, employed as tools, into our society. This raised several questions: do they pay taxes? Could you marry one of these beings? Where would they hold citizenship? Does the creator's bias in making the individual produce a bias within the individual, and could that bias lead to problems when adapting said individual to a different set of customs? We also spoke of the use of drones and robotic infantry within the military, and how that could lead to more attacks on civilians once the enemy is aware that destroying machines carries little risk or damage.

However, we all agreed that these things were far into the future, if not seemingly impossible, and that there were more pertinent issues at the moment than the rights of mechanically engineered tools.


  1. The social implications of integrating any sort of human replica into society particularly interested me. As William said, we mentioned the possible marriage of human and android, which raised the question: would we as humans want a partner, or even a friend, who was in complete agreement with everything we wanted them to say and do? I suppose this ventures into sociological territory more than philosophy, but I think it's an interesting question to ask.

  2. One thought that keeps recurring for me when I think about this issue is that, as William pointed out, these types of technology seem very far in the future, if not impossible. I personally do not believe it is possible to engineer a machine that would possess consciousness or will; such a machine is therefore nothing more than a device lacking the ability to think or feel beyond its programming. All of its responses are based not on what it feels or believes but on how it was programmed to respond. When we look at the discussion of slavery or marriage, I would propose that this is no different from questioning the ethics of treating your computer or cellphone as your slave, or perhaps your spouse.

  3. Something that I found interesting, and even somewhat silly, was the fact that the "code of ethics" being drafted for robots consists of rules that mimic those written in "Runaround," a science-fiction short story by Isaac Asimov that introduced his Three Laws of Robotics. For me personally, this makes it hard to take seriously the idea of robots being part of our everyday lives.

    I believe that every emotion, response, physical activity, and task that a robot portrays or performs comes from what is programmed inside it. I do not believe we will ever be able to program robots to have a conscience, soul, or thoughts. When it comes to the issue of abusing robots as slaves, I think that is merely what they are created for. Not the abuse, but completing tasks for humans. Yes, some may eventually look to robots for companionship and company, but robots already exist that some might consider slaves. For example: my Roomba may just be a vacuuming robot, but when a Roomba is made to vacuum, there is nothing the robot can do to refuse the task.

    I think that eventually robots may appear to be able to interact with people, but just like the robots of today, such as the well-known Siri, it will all come from programming.

  4. Having talked to a former genetics teacher today, I have an intriguing question: at what point do we shift our ethical debates from conscious robots to the ever-growing industry of integrating ourselves (beings with a conscience/soul) with machines? The near future may not hold a fully conscious robot, but how much of ourselves can we alter or augment before we become all but robotic? This, again, questions the very definition of human versus robot. I would be intrigued to hear what a trans-humanist would make of this (William)?