The Three Laws of Robotics, also known simply as the Three Laws or Asimov’s Laws, are a set of rules coined by the science fiction author Isaac Asimov, who introduced them in his short story ‘Runaround’ (1942).
Asimov incorporated the laws into nearly all of the robots that appear in his stories. They are intended as a safety feature for humans and robots alike: a workplace robot has to be safe to work around, and an industrial robot arm can be genuinely dangerous if proper safety measures are not in place.
The Three Laws of Robotics
Many of Asimov’s robot stories show his robots behaving in ways that illustrate how they apply the three laws in the situations created for them. Many other science fiction writers have since adopted the laws and incorporated them into their own robot stories.
The laws are as follows:
First Law
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Second Law
“A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
Third Law
“A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
Asimov and other authors have altered the original laws in later works of fiction, chiefly to further develop how robots interact with humans and with each other.
The Zeroth Law
Although it is a fourth law, it is numbered zero because it takes precedence over the others. In later works, Asimov’s robots take responsibility for human civilizations and sometimes even entire planets, so a law that precedes the original three was introduced. It states:
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
All four laws are referenced in many science fiction books, films, and other media, and it was inevitable that they would eventually find their way into discussions of the ethics of Artificial Intelligence (AI).
According to Isaac Asimov, the three laws grew out of a conversation with the editor John W. Campbell on 23 December 1940. Campbell, however, maintained that the laws were already implicit in Asimov’s stories and that he had merely drawn them out during their discussion.
Application of the Laws
Asimov maintained that the laws were obvious from the start and that he therefore deserved no special credit for creating them; they had simply never been written down before. He added that the laws apply to every tool used by humans.
He explained the laws thus:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
In Asimov’s reading, a tool should be safe to use, which is why tools are equipped with safety features. That does not guarantee that users will never hurt themselves, but an injury users inflict on themselves cannot be blamed on the tool; it is the users who can be said to be inept.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A tool should perform its intended function effectively unless doing so ends up harming the user. If it does begin to cause harm, it must be possible to switch it off so that the user is protected.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This law suggests that a tool should remain in its best condition unless destroying it becomes unavoidable for safety or for better application. For their part, users should wear protective gear to avoid being hurt while using the tool.
Conclusion
In short, Asimov meant that, ideally, humans should follow the laws too. He believed the laws are the best way to deal with human beings, who are, unfortunately, prone to occasional irrational moments.