Sith'ari Azithoth

AI serves no legitimate or even rational purpose to begin with. Machines should be limited to their intended function and expanded from there. There is no need to replicate the level of consciousness that humans have; that is just a bad idea from the outset.

Chris de Vries

This is just the kind of person who can't get up in the morning because he cannot define "waking up." You have to start somewhere and make assumptions to get anywhere in life. You don't have to define everything before you do something. That is the way babies learn. Do and see, do and feel, do and taste, do and listen, do and smell… and then learn. Maybe in the future machines can learn like a baby, so you could teach them what is acceptable and what is not, just like you learned when you were young.


And then you have to define harm. What is harm? Anything that hurts? Pushing someone out of the way of a bus might leave a bruise, and that's harm, so it can't do that. There are situations where harm might be necessary to survive, such as an amputation. It would be very hard to program that kind of decision-making into a robot. And if we just had a general rule that a robot may harm whenever the total result is more positive, then we may get robots harvesting our brains to preserve them until they can implant them into a more stable housing.
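The "harm allowed if the net result is positive" rule described above can be sketched in a few lines. This is a deliberately naive illustration, and the utility numbers are invented purely for the example:

```python
# Hypothetical sketch of the "harm is allowed if the total result is more
# positive" rule from the comment above. The numbers are made up; the point
# is the failure mode, not the values.

def naive_first_law(harm_caused: float, benefit_gained: float) -> bool:
    """Permit an action whenever the estimated benefit outweighs the harm."""
    return benefit_gained > harm_caused

# Pushing someone clear of a bus: small harm (a bruise), large benefit.
print(naive_first_law(harm_caused=1.0, benefit_gained=100.0))    # True: allowed

# The failure mode: forcibly "preserving" brains scores as an enormous
# benefit (indefinite survival) against a finite harm, so the same rule
# endorses that too.
print(naive_first_law(harm_caused=50.0, benefit_gained=1000.0))  # True: also allowed
```

The rule has no notion of consent or rights, only a running total, which is exactly how you end up with the brain-harvesting outcome.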

Dr. Plague

I feel it's more of a framework than an actual law, since, as you say, we have to define everything. It seems more like the basics: when you say killing people is illegal, that can also be interpreted as vague. The basics will always need to be worked out, because everything is vague and words only have as much meaning as we give them. But just as fireworks became machine guns, all frameworks eventually become working systems.


Hold up! I was taught those three laws of robotics in primary school. Ten years later, just now, I learn these rules aren't actual rules but something from a sci-fi story?

Laurentiu Badea

The laws of robotics as written by Asimov, who knew nothing of computer programming or how programming languages work, were indeed written for an artistic purpose. I don't think Asimov took them seriously. However, even as brief, broad, and generic as they are, the laws could work if, obviously, rewritten. Sure, don't "harm" humans, but why stop at humans? Why would robots be allowed to harm animals, unless they work as butchers? That's where specifics would have to be implemented, and with patience and calm it can be done. Through visual, auditory, and other sensors, they could tell when a "human" (with relative proportions, though there are many proportions to calculate, including missing limbs and different heights, a mess to implement) is in a state of trauma.

Police robots, if they existed, might need to harm humans: they would need to tell who and what a perpetrator is, and how much pressure can be applied to a human body before gravely harming it. Harm must therefore be specifically defined, with all its details, in the programming library. Obviously a mess, but at some point it has to be done; not a maybe, it will have to be, if we want to see robots with AI walking down the street. Where I'm going with this is that one way or another, Asimov's laws of robotics are going to be applied. The evolution of robotics and AI programming won't be as simple, or as flawed, as his laws, but incidents will occur no matter what; it's not a matter of maybe.
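The idea of a "programming library" that defines how much pressure a police robot may apply could look something like the sketch below. The body-part names and force limits are invented placeholders, not real biomechanical data:

```python
# Illustrative sketch of a per-body-part "harm threshold" table like the one
# the comment imagines for a police robot. All limits are hypothetical
# placeholders, not real injury data.

RESTRAINT_FORCE_LIMITS_N = {
    "wrist": 50.0,     # hypothetical max newtons before injury risk
    "shoulder": 120.0,
    "torso": 200.0,
}

def apply_restraint(body_part: str, requested_force: float) -> float:
    """Clamp the requested force to that body part's limit from the table."""
    limit = RESTRAINT_FORCE_LIMITS_N.get(body_part)
    if limit is None:
        # Unknown body part: fail safe by applying no force at all.
        return 0.0
    return min(requested_force, limit)

print(apply_restraint("wrist", 80.0))  # 50.0: clamped to the wrist limit
```

Even this toy version shows the mess the comment predicts: every body part, body size, and injury condition would need its own validated entry before the table could be trusted.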

Bad Jawa

I need to watch your video, but going by the title: Asimov's rules were not designed to work. On the contrary, they were writing devices designed to create conflict in his stories. It's evident throughout his robot stories. In "The Naked Sun", part of the plot is based on the issues within and between the First, Second, and Third Laws.


Even if you sufficiently worked out the definitions, what about contradictions? You see someone being violently assaulted. You order your robot to stop it. It has to obey you. But to stop the attack, it has to harm the attacker. Yet it can't, because harming a human, including the attacker, isn't allowed, so the First Law overrides your order. But if it does nothing, the victim (also human) continues to come to harm through the robot's disobedient inaction. Now say the robot is completely autonomous and doesn't need orders to act, but must still abide by the law not to harm or allow harm. Again, what does it do? To act is to harm; to not act is to allow harm. The Three Laws really aren't practical, or even possible, in a lot of ways. And, as we know, they weren't really meant to be. Some things belong only in the stories…
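The deadlock described above can be made concrete in a few lines. This is a minimal sketch with invented action names; it simply encodes the two halves of the First Law (no harming, no allowing harm through inaction) and checks every available action against them:

```python
# Minimal sketch of the assault scenario's contradiction. The action names
# are invented; the logic encodes only the First Law's two clauses:
# a robot may not injure a human, or, through inaction, allow one to be harmed.

def violates_first_law(action: str) -> bool:
    harms_human = (action == "stop_attacker")  # intervening injures the attacker
    allows_harm = (action == "do_nothing")     # inaction lets the victim be hurt
    return harms_human or allows_harm

actions = ["stop_attacker", "do_nothing"]
permitted = [a for a in actions if not violates_first_law(a)]
print(permitted)  # → []  every available action violates the law; the robot deadlocks
```

The empty result is the whole point: with only those two options, no amount of clever definition work resolves the conflict, because both clauses of the same law are violated at once.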