Thursday, September 30, 2010

I, Robot


I, Robot by Isaac Asimov is a collection of short stories linked by one recurring character, the robopsychologist Susan Calvin. It subtly suggests that robots can make choices and hold human lives in their hands. Eventually, the robots develop greater memory and reasoning than humans and can handle any situation, unless the harm or death of a human comes into play. Would the world be a better place if we all followed the three laws of robotics Asimov created for this novel?
  1. One may not injure a human being or, through inaction, allow a human being to come to harm.
  2. One must obey any orders given by human beings, except where such orders would conflict with the First Law.
  3. One must protect one's own existence as long as such protection does not conflict with the First or Second Law.
Of course, it is easier for robots to follow these laws than it would ever be for humans: the laws are programmed directly into their systems. A moral human being would follow them too, but we still have things like war because nothing strong enough is embedded in our own "systems" to keep us from harming one another.
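Just for fun, here is a tiny Python sketch of what that built-in priority ordering might look like. Everything in it (the Action class, the flags, the burning-room example) is invented for illustration; Asimov never spells out an actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # First Law: would this action injure a human?
    ordered_by_human: bool  # Second Law: did a human order this action?
    endangers_self: bool    # Third Law: would this action risk the robot?

def choose_action(candidates, inaction_harms_human=False):
    """Pick an action the way the Three Laws rank them, highest law first."""
    # First Law dominates: discard anything that injures a human, and
    # discard "do nothing" if standing by would let a human come to harm.
    safe = [a for a in candidates if not a.harms_human]
    if inaction_harms_human:
        safe = [a for a in safe if a.name != "do nothing"]
    # Second Law: among the safe options, a human order wins out,
    # even if obeying it puts the robot itself at risk.
    ordered = [a for a in safe if a.ordered_by_human]
    if ordered:
        return ordered[0]
    # Third Law: otherwise, prefer whatever keeps the robot intact.
    safe.sort(key=lambda a: a.endangers_self)
    return safe[0] if safe else None

# Example: ordered into danger, the robot still obeys (Second over Third).
actions = [
    Action("do nothing", False, False, False),
    Action("enter the burning room", False, True, True),
]
print(choose_action(actions, inaction_harms_human=True).name)
# -> "enter the burning room"
```

Notice how little code the hierarchy takes once each law is just a filter applied before the next one. That is exactly what we humans lack: a check that always runs first.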

These stories about robots and their interactions with humans give you a lot to think about. How would you handle a mind-reading robot that messed with your mind because it didn't know any better? Or a robot that believed a piece of machinery had created it and served that machine as its master instead of following your orders? Is this our future if we continue our research in robotics?
