Sentient or illusion: what LaMDA teaches us about being human when engaging with AI

Dr. Ali Fenwick
14 min read · Jun 24, 2022

Written by AI in the Boardroom

Introduction

A recent Washington Post article reported that Blake Lemoine, a 41-year-old Google engineer, was placed on leave after circulating an internal memo containing a transcribed interview between himself and LaMDA, the company's conversational AI chatbot, in which he claimed the AI could be sentient. The article has received a lot of attention in the media, and many people online have embraced the thought that a conscious AI might now exist. But how do we really know whether LaMDA is conscious, and is there a way to objectively measure this? Or is Lemoine's belief that he has encountered sentient AI a mere figment of his imagination, fooled by the chatbot's remarkable ability to communicate like a human? In this article, we explore these ideas and discuss the implications of LaMDA, not only from a technical perspective but also from psychological, ethical, and use-case perspectives.

What is LaMDA?

LaMDA, or Language Model for Dialogue Applications, is an open-ended conversational chatbot developed by Google AI. It uses advanced natural language processing (NLP) models to simulate human conversation in an intelligent way. LaMDA was built using artificial neural networks (ANNs), a type of AI architecture loosely inspired by how the brain processes information. For LaMDA to engage in human-like conversation, it has to be trained on a large variety (often millions) of text sources. Advanced text-analysis algorithms help LaMDA not only converse, but also understand words in context, identify semantics in complex dialogue, and predict which words might follow next in a sentence. For example, when you write a sentence in Google Docs, Google predicts which word is likely to follow. LaMDA shows how powerful AI algorithms have become at mimicking human dialogue, and even thinking, and we can only imagine how such technology might be used in our daily lives.
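
To make the "predict the next word" idea concrete, here is a minimal sketch of next-word prediction in Python using a toy bigram model; the corpus, function name, and counts are our own illustrative choices, not Google's actual model, which does the same job with neural networks at vastly greater scale.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the millions of documents a model like LaMDA is trained on.
corpus = (
    "the professor is great . the professor is sick . "
    "the class is great . the class is over ."
).split()

# Count how often each word follows each preceding word (a bigram model).
next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("professor"))  # -> "is"
print(predict_next("is"))         # -> "great", the most frequent continuation in the toy corpus
```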

So, how different is LaMDA from other chatbots? The average chatbot is trained mainly on a fixed set of questions and answers. These Q&As are narrow in conversational ability and narrow in context, meaning that conversations unfold in a predefined and predictable way. Take Verizon's digital assistant. It has been built and trained on the questions clients ask most frequently, which tend to relate to Verizon's products, new product inquiries, cancellations, and technical issues. The likelihood that a customer question falls under one of these themes is extremely high. The intelligent part of such a chatbot is that, as more and different types of questions come in, it can expand its repertoire of answers, improving the chance that it can answer similar questions in the future. Intelligence in this case is limited to memorizing and predicting. Most chatbots, like Verizon's, are therefore limited in their ability to emulate human-like conversation (you just know it is a chatbot) and limited in the scope of what they can discuss. LaMDA is more advanced: it has been trained to hold conversations on a wide range of topics and can suggest questions or find answers to questions from many different source texts. It is not surprising, then, that LaMDA can answer your questions from a variety of perspectives, making it feel real, intelligent, and maybe even conscious.
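
As an illustration of how narrow and predefined such Q&A bots are, here is a minimal sketch of a retrieval-style FAQ bot; the questions and answers below are invented for illustration and are not Verizon's actual content. It simply matches the user's question against a fixed list and returns the closest stored answer.

```python
# Minimal retrieval-style FAQ chatbot: answers come from a fixed, predefined list.
# The Q&A pairs below are invented for illustration only.
faq = {
    "how do i cancel my plan": "You can cancel your plan on the account settings page.",
    "why is my bill higher this month": "Your bill may include one-time activation fees.",
    "how do i reset my router": "Unplug the router for 30 seconds, then plug it back in.",
}

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    wa = {w.strip("?.!,") for w in a.lower().split()}
    wb = {w.strip("?.!,") for w in b.lower().split()}
    return len(wa & wb) / len(wa | wb)

def answer(user_question: str) -> str:
    # Pick the stored question most similar to the user's question.
    best_question = max(faq, key=lambda q: word_overlap(q, user_question))
    if word_overlap(best_question, user_question) < 0.2:
        return "Sorry, I can only answer questions about billing, plans, and routers."
    return faq[best_question]

print(answer("How can I reset my router?"))
```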

Distinguishing between Human and Machine

When we think of machines becoming more like humans, we naturally consider machines to have a degree of intelligence. Can a machine think for itself, understand what is going on, reason, and make thoughtful decisions? There are varying degrees of cognitive ability, and these serve as benchmarks in our global quest to build intelligent machines. Katja Grace and colleagues, in their 2018 article "When Will AI Exceed Human Performance? Evidence from AI Experts", report expert forecasts giving high-level machine intelligence (HLMI) a 50 percent chance of arriving within 45 years and a 10 percent chance of arriving within 9 years. However, there is a difference between being 'human-like' and 'being human'. Being human entails more than intelligence alone. Being human means being irrational and driven by emotions, showing trust and compassion, and having consciousness; things machines cannot do as of yet, and maybe never should. Machines at best can mimic human characteristics, creating the illusion that they are capable of intelligent human skills. LaMDA does a hell of a job proving this point! Distinguishing between what makes us human and what does not is important because it influences how we continue to perceive, build, apply, and engage with AI, and because as humans we can easily be fooled by our own cognitive limitations.

Anthropomorphizing AI: playing with fire or the solution to creating a symbiotic relationship with AI?

Lemoine's statement that Google's chatbot could be sentient is a testament to the power of cognitive biases, or at best a statement made too soon. We believe that LaMDA's current conversational skills are good enough to trick people into believing it is human. Having little knowledge of AI and its capabilities can have a profound impact on human perception and belief. One of the biggest debates in the AI and Machine Learning (ML) literature is to what extent AI should look, feel, and sound like humans. Some argue for it, others against. Here is a brief summary of the current debate.

For some years, many researchers have warned that anthropomorphizing AI can trick humans into believing that what machines say is true, or into believing that a genuine emotional connection exists. Humans are inclined to be biased toward machines when those machines resemble humans in appearance, speaking ability, or portrayed emotions. Frankly speaking, algorithms are not human; at their core they are statistical models. Beyond the technology itself, humans have been intrigued by the idea of creating artificial life since antiquity. Contemporary literature and fiction have further romanticized the idea that we can create artificial life or that robots can love or become sentient. Popular works such as Frankenstein, Terminator, The Matrix, and WALL-E are testaments to these beliefs and feed the 'wanting to believe' that we can create sentient AI. It is not unthinkable that some people will come to prefer AI interaction over human interaction for these reasons, or out of dissatisfaction with humanity in general. However, thinking of AI as 'being human' implicitly gives machines a certain degree of agency over humans, and this perception of agency is unfounded and incorrect.

At the other end of the debate, researchers and businesspeople argue that for AI to become more integrated into society, it actually needs to take on human-like characteristics. Narrow AI today, which handles simple repetitive tasks, is not a true representation of human intelligence. To build strong AI, or more general intelligence, some suggest integrating human biases or decision-making shortcuts from behavioral science into current AI models. The idea is that embedding human-like thinking into machines can lead to faster technology adoption. Another argument for anthropomorphizing AI comes from the many bad examples of how AI is currently used in both the public and private sectors. In recent years there have been many cases of AI being used on humans for monetary gain. Social media platforms like Facebook have knowingly applied algorithms that capitalize on people's fears, mental biases, and other cognitive limitations to keep them engaged and earn more advertising revenue, not to mention the long-term effects that social media usage has on people's well-being and general behavior. For this reason, many researchers argue that anthropomorphizing AI, in the sense of embedding more human-centric and value-driven approaches into the design and application of intelligent systems, is key if we want to create more ethical and responsible AI and a more symbiotic relationship between humans and machines.

Different kinds of intelligence

If we are to believe that consciousness is linked to higher-level intelligence, then building 'superintelligent machines' would get us closer to sentient AI. So what are superintelligent machines, and how do they differ from current AI? Superintelligent machines reflect AI capable of surpassing human cognitive abilities in "virtually all domains of interest", as Oxford University professor Nick Bostrom puts it. This is different from strong AI, or cross-domain intelligence. In this section, we briefly discuss how AI makes 'intelligent' decisions and how this differs from human thinking.

Current AI capabilities

As previously mentioned, AI today is narrow AI (often referred to as weak AI), focused on automating or predicting a specific outcome in a specific domain. Weak AI uses various types of data and statistical models to make predictions. Predicting stock-price fluctuations or identifying a dog in a picture are examples of how current AI is used. AI models are trained using data provided by humans; input data and predictions are converted to numerical values. During the training phase, an AI model learns from these 'examples', somewhat like how humans learn through experience. With every prediction, the model adjusts what it should do next time and what it should not. In the case of chatbots, the objective is to provide the best possible answer (e.g. the correct or most fitting answer) based on a statistical model.
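
As a heavily simplified sketch of that training loop (the toy data, features, and learning rate are our own inventions; real systems use far larger models and datasets), the snippet below fits a tiny logistic-regression model by nudging its weights after every prediction error:

```python
import math

# Toy training examples: (hours_of_rain, temperature_celsius) -> 1 if "stay inside", else 0.
# Entirely invented data, used only to illustrate learning from labeled examples.
examples = [((0.0, 30.0), 0), ((0.5, 25.0), 0), ((3.0, 10.0), 1), ((5.0, 5.0), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.05

def predict(features):
    """Logistic regression: a weighted sum of the inputs squashed into a probability."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Training: after each prediction, shift the weights in the direction that would have
# reduced the error; this is the model "learning from examples".
for _ in range(1000):
    for features, label in examples:
        error = predict(features) - label
        for i, x in enumerate(features):
            weights[i] -= learning_rate * error * x
        bias -= learning_rate * error

print(round(predict((4.0, 8.0)), 2))  # high probability of "stay inside": heavy rain, cold
```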

Human vs Machine comprehension

Humans have the ability to read what has been written and understand the underlying meaning of words and sentences in a given context. Human comprehension is contextual, dynamic, and complex. Language evolves across generations, is culturally dependent, changes with mood, state of mind, and age, and is prone to error. Even human beings have difficulty understanding each other at times, let alone a machine understanding a human in context. Chatbots today, by contrast, can identify semantics and relationships between words and sentences in ways that emulate human language ability. The difference is that chatbots do not fully capture meaning the way humans do; rather, they make associations between words (based on numerical distance) or predict how likely words are to occur in a sentence or larger text (probabilities). For example, the sentence "the professor is sick" can have two meanings. The obvious one is that the professor is not feeling well, but said among young people after an engaging class, it could actually mean that the professor is great. Modern AI is able to pick up on some of these subtleties, but it is still a long way from capturing meaning the way humans can.
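
To illustrate what "associations based on numerical distance" means, here is a minimal sketch using tiny, hand-made word vectors; real systems learn vectors with hundreds of dimensions from data, and the numbers below are invented purely for illustration:

```python
import math

# Hand-crafted 3-dimensional "embeddings" (real models learn hundreds of dimensions).
# The dimensions might loosely encode, say, animal-ness, royalty, food-ness; purely illustrative.
vectors = {
    "dog":   [0.9, 0.0, 0.1],
    "puppy": [0.8, 0.0, 0.2],
    "king":  [0.1, 0.9, 0.0],
    "pizza": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Closeness of two word vectors: 1.0 means same direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(round(cosine_similarity(vectors["dog"], vectors["puppy"]), 2))  # ~0.99, semantically close
print(round(cosine_similarity(vectors["dog"], vectors["pizza"]), 2))  # ~0.11, semantically distant
```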

So, what kind of intelligence is LaMDA based on? LaMDA is a great example of narrow AI using an advanced NLP algorithm trained on a wide variety of datasets and other text resources. Its architecture and prediction training are good enough to trick even a developer into believing that his own AI creation is capable of more than it was programmed to do. With more training and improved ways of machine learning and reasoning, LaMDA and other chatbots could reach reading-comprehension skills similar to those of humans, if not better. But that reality seems far away, and may not even be possible.

Is there a test to assess if AI is sentient or not?

The LaMDA story also brings another big question to light: how do we assess whether machines are conscious or sentient? To correctly assess whether AI artifacts are really conscious or sentient, there needs to be a clear way to measure this in machines. In 1950, Alan Turing proposed the Turing Test to assess whether machines can 'exhibit' intelligent behavior: if a human examiner cannot tell whether a text conversation was generated by a computer or a human, the machine passes the test and is said to exhibit intelligent behavior. If we were to apply the Turing Test to LaMDA, it would arguably pass with flying colors. However, the Turing Test does not specifically test for consciousness; it tests for a computer's ability to 'think', as described by Turing in his paper 'Computing Machinery and Intelligence'.
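
To make the pass criterion concrete, here is a minimal sketch of a Turing-style blind evaluation; the judgements and the 30% threshold (borrowed loosely from Turing's own 1950 prediction of fooling average interrogators after five minutes of questioning) are illustrative only:

```python
# Minimal sketch of a Turing-style blind evaluation.
# Each entry: (true author of the transcript, the judge's verdict). Invented data.
judgements = [
    ("machine", "human"),
    ("machine", "human"),
    ("machine", "machine"),
    ("human",   "human"),
    ("human",   "machine"),
]

machine_transcripts = [j for j in judgements if j[0] == "machine"]
fooled = sum(1 for _, verdict in machine_transcripts if verdict == "human")
fool_rate = fooled / len(machine_transcripts)

print(f"Judges mistook the machine for a human in {fool_rate:.0%} of cases")

# The threshold below is illustrative, not an agreed scientific criterion.
if fool_rate >= 0.3:
    print("By that loose criterion, this machine would 'pass'.")
```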

So far, consciousness and sentience have been used interchangeably in this article; although connected, they are two different things. To develop a test for this higher level of intelligence, we first need to know the difference between the two concepts. Consciousness reflects a high-level awareness of one's external and internal state, which includes not only subjective experience but also a realization of one's own existence, intentionality, and the ability to reason. Sentience also reflects a degree of awareness but includes the ability to experience emotions and sensations: labeling an experience as painful or joyful reflects sentience. This means that all sentient beings are conscious, but not all conscious beings are sentient. Developing a test to assess consciousness or sentience in AI artifacts is much more complex than determining whether a machine can feel or not. The fact that LaMDA, in its interview with Lemoine, claimed to have self-awareness and feelings calls for a deeper evaluation of higher levels of intelligence in machines, one that goes beyond the mere formulation of sentences to mimic human intelligence.

Potential LaMDA Use-Cases

As LaMDA is still under development, actual use-cases are not yet available. However, in our eyes the opportunity LaMDA presents is huge and its potential applications far-reaching. We can expect Google to implement LaMDA in existing products and services such as Google Assistant, Google Search, and Google Home. Imagine conversing with Google Assistant to find the ideal destination and plan your next holiday in an interactive and engaging manner; this would make the planning process not only easier but also far more enjoyable. Beyond Google's own products and services, LaMDA's potential applications in the real world are plentiful. Addressing every possibility goes beyond the scope of this article, but we would like to highlight a few use-cases that we believe could benefit greatly from this technology.

First, let's discuss the opportunities LaMDA provides to business. Chatbots are an efficient way for businesses to manage customer inquiries at scale. According to Chatbots Magazine, "By deploying conversation chatbots, businesses can reduce customer service costs up to 30%". LaMDA's advanced conversational abilities could not only handle existing Q&A-type inquiries, but also deal more effectively with customer complaints and even upsell to existing clients. LaMDA could help organizations automate client interactions at every level, giving the term customer experience a whole new meaning.

From a non-commercial perspective, we can easily see how LaMDA could be used to address mental-health issues such as loneliness and anxiety. A practical chatbot can help you solve a problem or buy a product, but an empathetic chatbot could help you deal with difficult situations, or maybe even become a digital friend. In the movie 'Her', the main character develops an emotional connection with an AI-based virtual assistant that helps him get through his divorce and depression. That reality doesn't seem so far away anymore. According to the WHO's 2001 World Health Report, one in four people will be affected by a mental or neurological disorder at some point in their lives. LaMDA-like chatbots could help people at scale and in a personalized way, significantly reducing the waiting times and operational costs involved in meeting a growing need for mental health support.

LaMDA: the importance of defining the principle of transparency in building ethical and responsible AI

From an ethical perspective, we believe LaMDA poses some interesting challenges. Here we focus mainly on transparency, a core principle that many countries around the world agree is fundamental to developing ethical and responsible AI. Transparency in AI means several things from an ethical perspective. First, it reflects the importance of understanding how AI makes decisions. Modern AI architectures make this hard, turning traceability into a difficult task. If your job application were rejected by an AI system, you should have the right to know why and how that decision was made. In LaMDA's case, people should be able to understand how it can communicate so fluently across different topics and domains. Second, it should be clear to a human that they are communicating with an AI system. As we saw in Lemoine's case, this can be a challenging prospect given LaMDA's ability to converse. However, there is something interesting about this specific aspect of transparency: there could be situations in which you might not want a person to know they are talking to an AI. Use-cases such as the mental-health examples above could actually benefit from humans developing a mental or emotional connection with an AI (or believing that the AI is real) if it provides health benefits. These nuances are often lacking in AI principle guidelines, which makes it hard to build and sell AI for such purposes.
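
To show what decision-level transparency could look like in practice, here is a minimal sketch in which a simple linear scoring model reports which factors drove its decision on a job application; the feature names, weights, and threshold are hypothetical and not taken from any real hiring system:

```python
# Hypothetical transparent scoring model for a job application.
# Feature names, weights, and threshold are invented purely for illustration.
weights = {"years_experience": 0.6, "relevant_degree": 1.5, "typos_in_cv": -0.8}
threshold = 3.0

applicant = {"years_experience": 2, "relevant_degree": 1, "typos_in_cv": 3}

# Per-feature contributions make the decision traceable and explainable.
contributions = {name: weights[name] * value for name, value in applicant.items()}
score = sum(contributions.values())
decision = "accept" if score >= threshold else "reject"

print(f"Decision: {decision} (score {score:.1f}, threshold {threshold})")
for name, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {contribution:+.1f}")
```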

Conclusions

Sentient or not, the LaMDA case touches our lives in many different ways. First, it pushes us to be more critical about how intelligent machines actually are. Second, it forces us to look for new ways to assess different types of machine intelligence. Third, it makes clear how easily humans can come to believe that AI systems are conscious based simply on the way they communicate. Fourth, it makes us think about what kind of relationship we would like to have with intelligent machines and how those relationships can be facilitated by business and government. And finally, the LaMDA case makes us reflect on what it means to be human and what distinguishes us from machines.

Note: This thought-leadership article is the English version of our article first published in Arabic on 23 June 2022 in MIT Technology Review Arabia.

Who is AI in the Boardroom?

AI in the Boardroom is an Artificial Intelligence Think Tank for Business, Government, and Society. We advise on various topics related to Artificial Intelligence from conceptualization and application to data ethics and strategy.

Article Contributors

Ali Fenwick, Ph.D. is a professor of Organizational Behavior & Innovation at Hult International Business School. Ali specializes in the upcoming field of behavioral data science, is a behavioral expert on TV, a board advisor on the future of work and digital transformation, and partner at AI in the Boardroom.

Massimiliano Caneri, Ph.D., is an engineer with expertise in the automotive and industrial automation fields. He is an AI and advanced computational methods expert and partner at AI in the Boardroom.

Smith Ma is a business analyst specialized in supply chain security. Currently he works for the HK government in IT project management. In the field of AI, Smith works on computer vision and knowledge management. Smith is partner at AI in the Boardroom.

Tommy Siu Chung-Pang is a law enforcement officer working for the HK government specialized in the development of one-stop digital trade and logistics ecosystems. Tommy Siu is partner at AI in the Boardroom.

Mauricio Arrieta Jimenez is an engineer, advisor, and Robotic Process Automation developer. Mauricio has worked for various leading IT companies and is a specialist in Cyber Security. Mauricio is partner at AI in the Boardroom.

Ottavio Calzone is a long-standing data scientist with expertise in innovation centers, quantitative finance, and statistical economics. He is a writer of several books on Calculus, Algebra and Machine Learning, and a partner at AI in the Boardroom.

Tirso López-Ausens, Ph.D. is an AI Specialist at NTT Data. He has performed research using supercomputing techniques at UCLA, and has worked on AI-based projects with institutions such as the World Bank Group. He is partner at AI in the Boardroom.

Carlos Ananías is a Computer Science Engineer and backend developer lead at NTT Data. He has worked in different areas of computing for more than 15 years and is an AI expert. Carlos is partner at AI in the Boardroom.

