What Are the 3 Laws of Robotics? And Do We Adhere to Them?



Our current state of technology, and robotics in particular, was foreseen by several science-fiction authors of the past. One such author was Isaac Asimov, who, in his works, defined the Three Laws of Robotics.

To him and to the audience of his time, all of that was science fiction. But considering that his stories take place only some decades ahead of us, they might not be too far off from how things are actually developing with automation and robotics today.

Asimov’s Laws of Robotics made sense in their time, and they still make sense today. Are we following them? Are we making use of this literature, or are we blindly discarding it as a fairy tale? Please consider this article as an opinion piece. If you disagree with any aspect, I welcome your thoughts in the comment section below.

Let’s take a look at these Laws of Robotics and whether or not we are taking them into account for the robotics of today.

The First Law

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

It makes sense for creators to restrict a power that could destroy them. So the top priority for robots would be not to harm a human, nor to let one come to harm in any other way. Let’s think of an example.

A human is about to walk into a busy street junction, not paying attention to the passing vehicles. If a robot were to spot that, it would aid that person, preventing him or her from being hurt or even killed.


Are we accounting for this in reality? No, we are not. Automation has long been a part of industry, but to this day, not even robotic arms have the intelligence to avoid hurting humans. They lack the ability to identify living beings, due to missing or poorly designed sensors, and they are not programmed to treat this First Law of Robotics as a priority.

Accidents happen, and people can get injured or even killed by today’s robots, even if only because of an error in the system. There will always be errors in systems.

The Second Law

“A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”

This makes sense to me, and I’m sure it makes sense to you. No law and no program in the world does any good if the artificial entity can simply bypass its commands.

Let’s jump back to our example scenario from above. If the robot’s owner commanded the robot not to help the endangered person, the robot would ignore that input and, regardless of the request, dash to save the human. So a robot could become a killer neither directly nor indirectly.

Are we implementing this in our AIs and robots today? No, we are not. You could argue that carrying out commands is part of their system design and the IT workflow behind how they work, yet the ethical background of this law has not been accounted for. If a machine operator gave the right input, direct or indirect harm to a human would be possible at the current state of technology.

The Third Law

“A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

This law imitates one of the most primal instincts of organic creatures: the drive to survive at all costs. It makes sense to account for it in robotics as well, as long as its priority remains below that of the First and Second Laws of Robotics, as mentioned above.


Again, going back to the example of the person who is about to walk onto a busy street: a robot sees this person stepping into danger and facing the risk of being injured or killed. The robot calculates the best possible action, trying to ensure that both the human and the robot itself remain unharmed. If the calculation allows for no such option, the robot will still attempt to save the person, even though it might be destroyed in the process, aware that its existence will cease.
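To make this priority ordering tangible, here is a minimal sketch in Python of how the three laws could be encoded as an ordered decision over candidate actions. The Action model and its fields (harm_to_human, disobeys_order, self_damage) are hypothetical illustrations of mine, not any established robotics framework.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_to_human: float  # expected harm to any human (First Law)
    disobeys_order: bool  # violates a human order (Second Law)
    self_damage: float    # expected damage to the robot (Third Law)

def choose_action(candidates: list[Action]) -> Action:
    """Pick an action by Asimov's priorities: First Law outranks
    Second Law, which outranks Third Law (self-preservation)."""
    # 1. First Law: keep only the actions with the least expected human
    #    harm, which also penalizes harm caused through inaction.
    least_harm = min(a.harm_to_human for a in candidates)
    safe = [a for a in candidates if a.harm_to_human == least_harm]
    # 2. Second Law: among those, prefer obedient actions if any exist.
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # 3. Third Law: only now minimize damage to the robot itself.
    return min(obedient, key=lambda a: a.self_damage)

# The robot sacrifices itself if that is the only harm-free option,
# even against an explicit order to stand down.
actions = [
    Action("do nothing", harm_to_human=0.9, disobeys_order=False, self_damage=0.0),
    Action("push person clear", harm_to_human=0.0, disobeys_order=True, self_damage=1.0),
]
print(choose_action(actions).name)  # -> "push person clear"
```

The ordering of the three steps is the whole point: self-preservation is only consulted once human safety and obedience have had their say.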

Are we programming AIs like that? No, we are not. We are not giving our artificial creations self-awareness, and in the experimental cases where we try, we do not grant them the priority of preserving their own existence, let alone one that is subordinate to the higher laws.

The subsequently added Zeroth Law

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

This law was added later, and it accounts for logical gaps in Asimov’s set of robotic laws. Beyond the focus on a single human being, this law addresses mankind as a whole. The logic here is: don’t harm a human, don’t harm a group of people, and don’t change the environment in a way that would indirectly harm humans or humanity.

Now things get tricky. For a full brief on risk and value calculation, please consider reading my article “When Self-Driving Cars Decide Who Lives and Who Dies.” For now, however, I’ll try to make it short as far as examples go.


Any kind of risk calculation by robots is likely to relate to insurance data and this Zeroth Law. If a robot sees a risk yet is conflicted as to whom it should help, it will run calculations to make a decision. If both a child and an adult are at risk of getting injured, the robot might calculate a larger value to humanity in saving the adult. Not a humane choice, but the law never promised to be humane either.

But at least this Zeroth Law can overrule cold insurance data. As grim as it may sound, a person who suffers a permanently disabling injury carries a greater insurance cost than a person who is killed. Taking this law into account, the robot would at least prefer a saved life over a cheaper death. That’s good news, right?
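As a thought experiment, here is a minimal sketch, again in Python, of how such a Zeroth Law check could override a pure insurance-cost ranking. All names and numbers here are made up for illustration.

```python
# Hypothetical outcomes of two possible interventions (made-up numbers).
outcomes = [
    {"name": "save; person permanently injured", "insurance_cost": 2_000_000, "survives": True},
    {"name": "don't save; person dies", "insurance_cost": 500_000, "survives": False},
]

# Pure cost minimization would pick the death, because it is "cheaper".
cheapest = min(outcomes, key=lambda o: o["insurance_cost"])

# A Zeroth Law filter first discards outcomes that harm humanity
# (simplified here to: a human dies), and only then minimizes cost.
acceptable = [o for o in outcomes if o["survives"]] or outcomes
chosen = min(acceptable, key=lambda o: o["insurance_cost"])

print(cheapest["name"])  # -> "don't save; person dies"
print(chosen["name"])    # -> "save; person permanently injured"
```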

Now to the final assessment. Are we taking this Zeroth Law of Robotics into account for the robots and AI-controlled devices of today? Frankly, I don’t know. I can only hope that we are not teaching robots to make decisions based on insurance costs alone. The well-being of people should be a priority, directly and indirectly, on the scale of humanity and the whole Earth.

Summary and thoughts

OK, what’s the summary? How do we answer the question the headline asks? Three times “no,” as far as the public is aware, and the bonus question only received a loose “I don’t know.” Not a very promising result.

How come we discard all of this just like that? Shouldn’t these rules be reconsidered somehow? Even though they originate from science-fiction stories, there might be something essential about them. Perhaps one could also extend them to account for animals and other lifeforms as well? Food for thought…


What about military usage? Usually, it won’t be civilian industry that is the first to invest in such a technology. Would the Laws of Robotics be switched off for offensive robots? Should they all be set to defense mode only? Who’s calling the shots here, and who is ready to pull the plug when things go wrong?

I’m not saying we shouldn’t work on automation and robotics, but why not make sure our creation won’t become our peril? Perhaps we could think a little longer about the theory before we turn each and every achieved milestone into a product. More time spent on the “should” rather than the “could,” perhaps. More “what if” than “how to” to think about. Not only because these machines will replace a lot of workers.

I welcome the idea of building artificial entities. Personally, I enjoy that thought in many respects. I only call on the scientists, engineers, and decision-makers not to handle this like the release of a new piece of software or a tech gadget. Be smart.

What do you think? Are we well on the way to extinction, as portrayed in movies like the Terminator series or The Matrix? Or is robotics an unmatched advancement without risk? Can we coexist? Post your comments below!

Photo credit: Cristian Ungureanu / Bruno Cordioli / Siyan Ren / Cafe Neu Romance / Doc Chewbacca
Sources: Wikipedia / Harriet Agerholm (Independent)

