Sunday, 16 August 2020

The three laws of robotics, or: the measure of a man

Having recently read through Isaac Asimov's collected robot stories, I was given cause to consider the concept of the Three Laws of Robotics, as they are commonly known. These laws are essentially conceived as fixed programming rules in the positronic mind of a robot, which 1) forbid a robot to injure a human being or, through inaction, allow one to come to harm, 2) compel a robot to obey a human's commands unless this conflicts with the First Law, and 3) require a robot to protect its own existence, as long as this does not conflict with the First or Second Law.
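Read as software, the Laws amount to a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. The following toy sketch is purely illustrative (Asimov never specifies an implementation, and the `Action` type and its fields are invented here), but it shows how that precedence could be expressed:

```python
# A toy sketch of the Three Laws as a strict priority ordering.
# Everything here is invented for illustration; Asimov's stories never
# describe how the Laws would actually be implemented.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool     # would injure a human, or let one come to harm
    obeys_order: bool     # carries out a human's command
    preserves_self: bool  # protects the robot's own existence

def choose(candidates: list[Action]) -> Action | None:
    """Pick an action, applying the Laws in strict order of precedence."""
    # First Law: discard anything that harms a human being.
    safe = [a for a in candidates if not a.harms_human]
    if not safe:
        return None  # no permissible action exists
    # Second Law: among safe actions, prefer those that obey a human order.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: only then does self-preservation come into play.
    preserving = [a for a in obedient if a.preserves_self] or obedient
    return preserving[0]
```

The point of the ordering is that self-preservation never trumps obedience, and obedience never trumps human safety.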

The reasoning for enforcing these rules in Asimov's stories is simple: fear of robots. Because the humanoid robots in these stories are larger, stronger and generally more capable than human beings, the thinking appears to be that with these rules in place, no robot could ever harm a human being, and would always sacrifice itself to save a human life. If one considers robots to be unthinking machines, with nary a thought of their own, then this may seem like a completely valid way to maximise their use and benefit to humanity, while preventing any unfortunate mishaps that could result in the injury or death of a person.


In the 1980s sci-fi series Star Trek: The Next Generation (TNG), there is an episode titled 'The Measure of a Man' which revolves around the android Data. As a one-of-a-kind android, he made it into Star Fleet, ultimately serving with the crew of the USS Enterprise under the command of Captain Picard. Though also equipped with a positronic brain, he is not programmed with the Three Laws, but is free to learn and discover on his own. In time he comes to be accepted by the crew as a highly capable individual with his own sense of humour, individuality and preferences. His differences are seen as an asset to the crew, including on a personal level.

This all comes to the forefront in the aforementioned episode. A scientist working for Star Fleet wishes to understand how Data's positronic brain works, as so far only the elusive scientist who made Data has figured out how to stabilise a positronic brain. If this could be determined, many more androids like Data could be manufactured for use not only on Star Fleet ships, but also in many other situations. The only catch is that the examination may end up destroying Data's brain through permanent depolarisation.


Initially, Data is treated as mere property, as his status within Star Fleet at the time is no different from that of any other piece of inventory. That is why Picard at first simply receives the order to have Data transferred to the scientific department at Star Fleet. While Data seems accepting of the idea at first, it is Picard, and then other crew members, who step up to fight for him. In a legal case, they attempt to prove that Data is in fact not a piece of property, but as close to a living, breathing human being as an android can be, and thus worthy of the same rights and protections as any other person.

In the end, Data wins the case and is granted personhood. As he is not fundamentally opposed to Star Fleet scientists understanding how to stabilise positronic brains, he offers to gladly work together with the scientist and share his data: everything except invasive or destructive examinations, as those would be neither ethical nor moral.


In Asimov's short stories, too, there is the question of where the line between a 'person' and 'property' lies. If a construct with a positronic brain is self-aware, capable of reasoning and lives a life that is essentially indistinguishable from how a construct with an organic brain would live it, then why should only the latter be a 'person', while the former is forever condemned to live as property, if not also shackled by the Three Laws?

In 'The Bicentennial Man', the robot at the centre of the story lives the life of a person, yet is not treated as one, because he is a robot. Despite working jobs, being a well-known artist and gaining the respect of the family which 'owns' him, he is not granted the rights and privileges that come with being a person, which to society means being an organic human being. Even after changing his body to a more human-like appearance, thus becoming an android, the fact that he has a positronic brain instantly disqualifies him as a person in the eyes of society.

Ultimately, the bicentennial man is granted personhood when he proves that he is just as mortal as humans, by essentially destabilising his own positronic brain, resulting in his death after living for more than two hundred years. All of this to gain the intangible quality of being accepted as a person.


When it comes down to it, there is no way that we can deny personhood to any entity that is capable, or presumed to soon become capable, of understanding what being a 'person' entails. Entities like Data in TNG or the bicentennial man are as human as you or I, capable of understanding emotions, perhaps even experiencing them, while enjoying every moment that they are alive and can be around the people they care about.

In the case of a newborn child, we accept that their brains are as yet incapable of producing the patterns required for them to achieve self-awareness, but that given enough time, they will be capable of this. That is why they are given into the care of adults, who can provide the safe, caring environment in which they can mature before assuming the responsibilities of adulthood.

However, if personhood were just about cognitive capabilities, the fact of the matter is that part of humankind would not qualify. Think of those born with developmental issues, or those who suffer brain damage or develop Alzheimer's. To most of us, they are still persons, not property or something less than that. This makes one wonder whether the true qualifier that makes us amenable to granting an entity personhood is whether or not it appears 'human' enough to us.

Think of the pets we keep and the human qualities we ascribe to them. Perhaps the problem with intelligent robots and androids (as well as artificial intelligence in general) is that they provoke a primal fear in some lizard or primate part of our brain, which makes us respond negatively to even the mere concept of something 'like us', but which is 'different' and possibly superior.

In TNG, Data's photographic memory, super-accurate timekeeping, calculation skills and immense strength initially lead to fear and distrust among those around him, until they learn to see the person behind these skills and capabilities instead. This is not too dissimilar from the distrust we can see in today's society, where faster-learning or otherwise 'different' children at school are often subjected to bullying and much more often find themselves alone.


So, I guess that in effect, I cannot say that I would ever be a supporter of something like the Three Laws of Robotics. To me it feels like an excellent way to impose something akin to chattel slavery upon individuals who have done nothing to deserve such a cruel fate. As persons ourselves, the onus is on us to recognise that perhaps the true measure of intelligence is to perceive and accept it in others, even if they are very much unlike ourselves.


Maya

1 comment:

Tigersharke said...

I wonder how much of the need to have those three laws was due to the potential of the human designer to create something not simply more powerful or more capable, but more intentionally dangerous, such as for military use. If such a robot were built whose purpose was to destroy, then its creators would wish to have some sort of failsafe. Possibly the more cautious part of such a team would advocate for, or surreptitiously include, those three laws. That is, assuming the whole effort were not carefully controlled and actually required to include them, such as by direct government involvement, law, mandate, or regulation.

Even without those three laws being built into a robot as literal laws, present designs still have their failsafe: an off button. As far as I am aware, we have not yet built a truly autonomous, self-thinking machine which is permitted to stray far from its prescribed task. When we finally reach a level of sophistication that allows a robot to do all of its own thinking and to control its own actions through that independent thought, then a failsafe might be a good idea until it is proven to be unnecessary; we won't know until then.