The sex robot community—the people who make the sex robots, and the people who want to have sex with the sex robots—suffered a blow this past week, when the Houston City Council voted to preemptively ban what would’ve been the first sex robot “brothel” in the U.S. But even those council members must know that their gesture was futile. Soon the stigma will fade, and Wal-Mart will sell these things in sixty different flavors. Which of course means that, sometime in the future, you’ll almost certainly be able to buy a BDSM robot.

I love this opening, from a Gizmodo article; it cuts right to the assumption that, like it or not, we’ll soon have robots in our lives, and they’ll be able to do lots of naughty things.  The article basically asks the question: Would a BDSM robot violate Asimov’s Three Laws?  The implication is that BDSM, or other behaviors of a sexual (or non-sexual) nature, can be difficult to categorize in terms of pain, hazard and consent, and might thereby constitute unacceptable dangers to humans.  It then asks a number of experts to weigh in on the question.

Personally, I think this is funny… because, at this moment, and in the foreseeable future, Asimov’s Three Laws don’t exist… and come to think of it, they never really worked.

[Image: NAO, a clumsy and limited humanoid robot]

For the record, science fiction author Isaac Asimov’s Three Laws of Robotics are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws were absolutes designed for a 20th century reality when robots and other machines were perceived as nothing more than blunt instruments.  Isaac Asimov wrote his Three Laws in order to suggest the concept of a safe and obedient robot… which he could then logically challenge in his I, Robot short stories, and later his R. Daneel Olivaw stories.  He repeatedly showed how the Three Laws could be twisted, subverted and misused, then presented humans (or other robots) clever enough to find logical arguments designed to reset the laws and rein in errant robots.  His stories were so successful and popular that his Three Laws became default science fiction lore… even though Asimov himself had repeatedly demonstrated that the idea had serious functional loopholes, some of which could still result in human harm, or totally incapacitate a robot for no good reason.

And even without the logical loopholes, the Three Laws were often compromised in the stories by bad human programming, corrupted data or just a story-driven act-of-god glitch in the system.  Robots, it seemed, could always be made to malfunction, get confused or just plain break… thereby throwing the Three Laws right out the window.

I think we’ve lived in this world—whose real law of physics seems to be based around Murphy’s Law—long enough to see that the idea of Asimov’s laws being bulletproof is pretty hilarious.  They were no more perfect than the robots they governed.

[Image: robot love]

But the day when we will need to program rules for independent robots is coming.  The fact that we’re actually, seriously discussing sex robots today is proof enough that robots are evolving well past the blunt instrument stage.  And hopefully, by the time we’re really ready to program rules and values into sentient robots, we will have realized how archaic and amusing Asimov’s laws truly are.  Maybe by then, some other writer (or, better yet, actual scientists) will have posited a new set of 21st century laws, based on the practicalities of the real world and designed to be flexible, considerate and honestly pro-life.

I say a modern set of laws should absolutely start with no less basic a concept than the principle long associated with the Hippocratic oath: “Do No Harm.”  This First Law requires a healthy subset of rules and guidelines designed to help a robot through the physical and even the psychological nuances of “harm”… easily the most challenging area for any being to navigate.  That area would by necessity require a good understanding of emotion as well, and with its superior senses, a robot should be able to detect and understand the emotions displayed around it more easily than we can.  If this law can be properly written, the rest should be child’s play.  (And if you really want a sex robot, you’d better make sure this part works nigh-flawlessly.)

[Image: Baymax the caregiver robot]

Once that first series of laws is set, we can start thinking about whether we can write a law that requires obedience to humans… or whether we should be doing that at all.  After all, humans don’t exactly have the best track record when it comes to ordering entire races around.  Instead of trying to create a race of slaves, we should be creating machines that see value in applying their skills in support of, and cooperation with, humans… they should be companions whose presence enriches our lives and their existence.  “Be a Best Friend” may be the best way to put it.  A best friend will support you and help you, do things for you whenever possible, and—if it’s a good friend—try to protect you and keep you from doing unwise or unsafe things.

And if I were to cap these off, I would add a law that would make sure the robot saw the big picture… the impact of its actions (and those of its best friends) on the community at large, the collective legality and morality of society, and the future.  “Be a Citizen” might cover this, though it’s admittedly vague; it’s an abstract and hard-to-define area.  But hopefully it would be enough to make sure a robot wasn’t helping or encouraging a human to do anything immoral, illegal or dangerous, to themselves or to others.

So, to recap, my updated Three Laws of Robotics:

  • Do No Harm;
  • Be a Best Friend; and
  • Be a Citizen.
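
To make the precedence concrete, here’s a minimal toy sketch of how such a hierarchy might be expressed in software.  Everything in it (the Action fields, the risk thresholds, the evaluate function) is hypothetical, invented purely to illustrate the ordering of the three rules; it isn’t a real robotics API, and scoring something as fuzzy as emotional harm is exactly the hard part this glosses over.

```python
# Hypothetical sketch: the three proposed laws as an ordered series of checks.
# "Do No Harm" is always evaluated first, "Be a Best Friend" second,
# "Be a Citizen" third.  All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    physical_risk: float      # 0.0 (safe) .. 1.0 (certain injury)
    emotional_risk: float     # 0.0 .. 1.0; psychological "harm" is the hard part
    helps_human: bool         # does it actually support the person asking?
    lawful_and_ethical: bool  # is it acceptable to the wider community?

def evaluate(action: Action) -> str:
    # 1. Do No Harm: refuse anything likely to hurt someone, physically or emotionally.
    if action.physical_risk > 0.2 or action.emotional_risk > 0.2:
        return f"Refused: '{action.description}' risks harm."
    # 2. Be a Best Friend: prefer actions that genuinely support the human.
    if not action.helps_human:
        return f"Declined: '{action.description}' doesn't help anyone."
    # 3. Be a Citizen: check the bigger picture of legality and community impact.
    if not action.lawful_and_ethical:
        return f"Refused: '{action.description}' harms the wider community."
    return f"Approved: '{action.description}'."

if __name__ == "__main__":
    print(evaluate(Action("fetch a glass of water", 0.0, 0.0, True, True)))
    print(evaluate(Action("help hide stolen goods", 0.0, 0.0, True, False)))
```

The point of the sketch is simply the order of the checks: harm is weighed before helpfulness, and helpfulness before civic impact.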

[Image: a robot and a woman converse at a table]

A 21st century set of rules for robots that can walk beside us, not just behind us, and one that might take us together into the next century.  These laws may not be perfect, but fortunately we still have time to work out the kinks before it comes time to apply them.  Because, as Asimov demonstrated, there can be a great chasm between the laws we write… and the reality we get.