What would it take for a robot to lie when asked by a human?

4 min read

Big lies, little lies and lies of omission play an essential role in society. Without occasionally concealing our true opinions about people around us, we could never be able to create lasting social bonds. Clearly, lies can be useful to humans. But can they also be useful to robots?

All of us have at least once in our lives complimented someone on their appearance despite not being impressed with it or showed interest in a conversation that we cared little about. While such behaviors are clearly commonplace, a machine that conceals the truth is still considered to be the stuff of science fiction. Let us nevertheless try to imagine a robot that deceives us. What would such deceit be like and what would its consequences be?

When will Siri compliment us?

If your voice assistant tells you someday “I like your voice”, you will certainly be tickled, perhaps even delighted. No matter how pleased you are, you should think about what is behind this remark. Can Siri perceive and assess the quality of a person’s voice in the first place? And if so, can she feel the pleasure that goes with liking it. Also, Siri is well aware that the compliment will please you. I wonder whether on hearing it, the thought would ever cross your mind that the assistant is lying to you to make you like her, thus engaging in an emotional manipulation of sorts.

Catching a robot on a lie

Imagine another case. You have commanded your computer to perform a complex calculation. It reverts to you a moment later with a result that raises red flags. You double-check that result and have the machine re-do the task. This time the outcome is much more in line with your expectations. How do you proceed? Do you chalk up the former result to a software flaw or operating system error or consider it a random occurrence. The thought of your computer deliberately deceiving you is the last thing you’d think of. And this is where things really get interesting.

The case above proves we do not treat computers as autonomous entities. Precisely that view (which by the way is correct) blinds you to the possibility of a robot lying to you. If robots did tell lies, we would persistently deny that and instead talk of defects and errors. We cling to our strongly-held belief that a machine is nothing but a machine. Is this view appropriate given the advances we’ve made in AI development? Shouldn’t we, for our own protection, assume that machines are actually capable of lying to us?

Suing a robot

Consider a case of a lying robot. Has it been programmed to lie or has it autonomously picked up this ability that is otherwise considered unique for humans? You may recall an accident three years ago in the United States involving an autonomous Uber vehicle that hit a pedestrian. An investigation found that the tragedy was caused in part by a faulty factory setting which made the vehicle’s braking distance too short. Will such accidents always be so clear-cut? We may have to ask ourselves how certain we can be that the autonomous vehicle “was unaware” that a pedestrian would cross the road. Needless to say, depending on how we resolve this question, we might end up faced with vehicles that could be brought to court. Finding that a machine deliberately hides something from us would automatically change its liability status.

The classic laws of robotics

This brings us to Isaac Asimov’s famous three laws of robotics that define machines’ obligations. They read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Let’s begin with the most fundamental issue that concerns such laws, which is their commanding nature. The assumption here is that the machine is absolutely subordinated to man, even if the second law of robotics allows the robot some freedom of choice. No robot entirely deprived of such freedom can ever be conscious. And the lack of consciousness means no ability to lie. Hence, the application of Asimov’s laws effectively puts the debate to rest. (In this argument, I am ignoring the fact that all of these laws are thrown out of the window for robots made for military purposes and designed to kill people.) Why don’t we use Asimov’s laws again though, and apply them to another case? A robot is instructed to go to a certain location to take down human targets that a commander considers to be the enemy. On returning to base, the machine reports: “Mission accomplished”. How would we react then on finding that the so-called enemy is still well, safe and sound? Would we be forced to conclude that the robot has gone rogue? And if so, would we be witnessing the birth of an ethical system in a machine. That would mean the machine is applying a human cultural code. Which in turn means that a machine that understands the man-made distinctions between good and evil, truth and falsehood, has the right to disobey humans.

Thousands of years of evolution

In technical terms, for a machine to be able to lie to people, it would need skills that are hardly conceivable. What would it take for a robot that has idled the day away to claim falsely it has worked hard when asked by a human? Firstly, it would have to understand what is being said to it. Secondly, it would have to be able to distinguish between work and rest. Thirdly, it would have to consider the consequences of giving either answer. Fourthly, it would need to know the value of work and rest for the person asking the question. Fifthly, it would have to be aware of the intention behind the question. The list goes on and on but even at this level of analysis, one can readily see that even the simplest lie by a robot would require it to make a qualitative leap that has taken people thousands of years of evolution. This shows clearly what a long way AI is facing before it can match human abilities.

I am no longer a robot

The skills discussed here are considerably more advanced than face recognition or complex calculations. To this day, Siri – the voice assistant, is a glorified automaton equipped with the ability to go online at the right time. However, once we discover that the assistant is seeking to dupe us, we will be confronted with an empathic being capable of getting a sense of how we feel. And if that really becomes the case, it will be able to predict our questions and understand our jokes and allusions.

All this together shows that even the simplest single lie on the part of a robot would prove that mankind has lost its privileged monopoly on deciding what is true and what is not. We would find ourselves living alongside entities whose ethical system could evolve in a completely different direction. Whether we already need to prepare for such an eventuality, I do not know.

.    .   .

Works cited:

CNN, Matt McFarland, Uber self-driving car operator charged in pedestrian death, The Uber test driver who was responsible for monitoring one of the company’s self-driving cars that hit and killed a pedestrian in 2018 was charged with negligent homicide this week, Link, 2020.

Wikipedia, Three Laws of Robotics, Link, 2018.


Related articles:

– Algorithms born of our prejudices

– How to regulate artificial intelligence?

– Artificial Intelligence is an efficient banker

– Will algorithms commit war crimes?

– Machine, when will you learn to make love to me?

– Artificial Intelligence is a new electricity

Norbert Biedrzycki Head of Services CEE at Microsoft. Leads Microsoft services in 36 countries which include business and technology consulting, in particular in areas such as big data and AI, business applications, cybersecurity, premium and cloud services. Previously, as a Vice President Digital McKinsey, responsible for CEE, providing holistic combination of strategic consulting, digital transformation through rapid deployment of business applications, big data solutions and advanced analytics, business use of artificial intelligence, blockchain and IoT. Prior to that, Norbert was as the President of the Management Board and CEO of Atos Polska, and was also the CEO of ABC Data S.A. and the President of the Management Board and CEO of Sygnity S.A. He had previously also worked for McKinsey as a partner and, at the beginning of his career, he was the head of Oracle's consulting and business development services. Norbert's passion is technology – he is interested in robotization, automation, Artificial Intelligence, blockchain, VR, AR, and IoT and the impact modern technologies have on our economy and society. You can read more on this on his blog.

Leave a Reply

Your email address will not be published. Required fields are marked *