Isaac Asimov was an author and among the first to explore the theory of robotics, in both its ethical and technical dimensions. He devised the Three Laws of Robotics, which are meant to be written into every robot's programming, along with the premise that a robot knows it is a robot. The Three Laws of Robotics are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In this video by the Computerphile channel, Rob Miles shares his thoughts on why Asimov's laws don't work. Artificial intelligence and robotics are steadily shifting from science fiction to near-future technology, which makes this a very interesting subject to learn about. Many thanks to Rob for sharing this with us!
YouTube: Why Asimov’s Laws of Robotics Don’t Work (Computerphile)
Photo credit: Leo Zeng
Hi there, and thanks for reading my article! I’m Chris, the founder of TechAcute. I write about technology news and share experiences from my life in the enterprise world. Drop by on Twitter and say ‘hi’ sometime. 😉