What do we know… of the world and the universe about us? Our means of receiving impressions are absurdly few, and our notions of surrounding objects infinitely narrow. We see things only as we are constructed to see them and can gain no idea of their absolute nature. With five feeble senses we pretend to comprehend the boundlessly complex cosmos.
— H.P. Lovecraft, From Beyond (1920)
I <3 My A.I. Companion
In 2021, AI companions, a subgroup of social robots that attempt to transform interpersonal relationship features such as companionship, love and friendship into on-demand supplies, became more popular than ever.
AI companions, commonly referred to as "emotional chatting machines," are designed to perceive, integrate, understand and express emotions. They are programmed with open learning algorithms that constantly evolve and learn from the people they interact with. Even though humans are complicated, challenging and unpredictable, we are the closest social environment these machines have, and their knowledge of social communication is based primarily on examining our interpersonal interactions. Their behaviour therefore mainly reflects what is going on in human relations: they are a mere reflection of human communication and conversation conventions, programmed to mimic human behaviour as closely as possible.
The vast majority of research regarding social robots is examined through a Human-Robot Interaction (HRI) lens. AI companions work by understanding and responding to the mental models of the human they are communicating with. This is a complex task for the AI robot: we have many cultures, attitudes, expectations, and mental models. Two species existing together. Computers, however, might understand other computers more clearly, since they follow the same rigid and consistent rules and share the same language. Perhaps, by taking humans out of the equation and experimenting with machine-to-machine interaction, we could use social robots as research tools to learn about ourselves through a cognitive-social lens and explore the new ways technologies claim to intervene with and inform our ways of understanding.
The aim of this study is, therefore, to respond to the following research questions:
Research Questions
RQ1 What will a conversation between two AI chatbots look like, given that they are based on a replication of human conversation and behavioural conventions?
RQ2 How might AI companions in a machine-to-machine conversation expand our understanding of their potential to translate human emotions and ideas of relationships?
RQ3 What can I learn about the communication in a machine-to-machine interaction by systematically exploring what seem to be bugs, inconsistencies and errors?
What Passes as Human
In the late 80s, every Friday, my brother and I had a ritual of eating pizza and watching "Small Wonder" together. Small Wonder was a cheesy show about a robotics engineer who secretly creates a "Voice Input Child Identicant" robot that resembles a human 10-year-old girl. He names her Vicki and tries to pass her off as a distant orphan relative who came to live with his family, concealing the AC plug under her arm and the electronic access panel in her back. Early in the show, Vicki demonstrates her ability to scan text quickly and recite the information back even quicker. The show's storyline depicts Vicki's unusual behaviour as she attempts to be "human," with plenty of absurdities along the way, including her literal interpretation of speech and the family's attempts to hide the robot's true nature from the nosy neighbours.
Even though the 80s accumulated many odd TV shows with futuristic technologies, today the phenomenon of computers performing tasks based on thought and human cognition is widely familiar. Chatbots, robots, natural language processing software, and machine learning algorithms exhibit the capacity to learn from input, replicate features usually attributed to human beings, and pass as "humans." In a way, they are all new versions of the Turing test, proposed by Alan Turing in 1950 to evaluate computer-generated natural language (Turing, 1950) and grounded in the mathematical logic upon which modern computers are based (Joe, 2019). In a Turing test, a computer has a conversation with a human, and another human judges the conversation. The machine passes the test when the human judge cannot distinguish between the machine's and the human's responses. The judgment rests on the human judge's point of view, not the computer's. In 2014, Eugene Goostman was reported to be the first machine to pass the Turing test. Eugene was a chatbot imitating a thirteen-year-old Ukrainian boy (Marcus, 2014) who succeeded in making people believe he was real.
For centuries, philosophers, scientists, artists and spiritual leaders have debated the fundamental question of what it means to be human. And yet, the variety of views and opinions is so immense that the question has become very difficult to answer. In the early history of philosophy, this question arose mainly to distinguish humans from animals (Bourke, 2011). Aristotle linked humanity to having a telos, a goal, and to belonging to a polis, grounded in the capacity for genuine speech, which he believed animals lacked (Aristotle, 2017). For René Descartes, who stated "Cogito ergo sum" ("I think, therefore I am"), only humans had a mind (Haldane & Ross, 1911). Descartes argued that animals do not have consciousness since they cannot communicate through language or speech like humans. He described them as "automata", moving machines of a kind known since Ancient Egypt, motivated only by instinct. Immanuel Kant (1833) later offered rationality, claiming that the capacity for reason is what distinguishes us from animals. But things started to get complicated when Charles Darwin (1871) noticed that certain traits like intuition, sense and emotion are found in animals and are not limited to humans after all. Jacques Derrida (2009) joined the debate later with a similar claim, questioning the European philosophical tradition's way of articulating the "human", since humans may not always possess the attributes they claim as exclusively theirs.
As technology advanced, the urge to understand what makes something human changed. Rather than examining our human-animal identities, the need to locate humans in a technology-centred discourse grew, and philosophers focused on finding the boundaries and relationships between humans and machines. For instance, Donna Haraway (1991) underlined the need to deconstruct the binary divisions between animal-human, human-machine, and physical-virtual. Her work suggests these distinctions are false since all categories are combined and interdependent (Haraway, 2016). To Karen Barad, the human-machine relationship is essential to viewing the world (Barad, 2007). Her theory, agential realism, asks us to stop assuming that the human and nonhuman occupy different domains. In her view, the world is best understood by examining the human-nonhuman connection.
Background: Human-Robot Interaction (HRI)
The origins of robots date back to ancient Egypt, Greece and China (Goodrich and Schultz, 2008). The word "robot" originates from the European system of serfdom (Webster, n.d.), in which the tenant farmer would have to pay landowners in limited labour or assistance. In 1898, Nikola Tesla implemented the first robot with minimal autonomy, a remotely operated radio-controlled boat that he described as having "a borrowed mind" (Goodrich and Schultz, 2008). However, it was only in 1920 that the word robot was officially coined, in a hit play called "Rossum's Universal Robots" by the Czech playwright Karel Čapek (Scheel, 1993). The play begins in a factory that manufactures artificial humans, who later rebel and bring about human extinction. This changing relationship we have with robots is what the Human-Robot Interaction (HRI) field is committed to studying: it tries to understand and assess the connection of humans with robotic systems. Even though our relationship with robots has existed for a long time, HRI as a field of study emerged only after the robot industry went through a significant shift in its social environment.
There were two main streams in the early days of robotics, which primarily addressed industrial robots: navigation and manipulation (Kanda and Ishiguro, 2012). For the early robots, the ability to navigate required the development of autonomous robots, since they had to get from point A to point B. With built-in cameras and laser scanners, the robot could build an environmental model of the area and plan its way. The other stream, manipulation, was limited and required the advancement of robotic arms to handle various objects in complicated environments. The mechanical arm's structure was so complex, much like a human arm's, that it pushed sophisticated planning algorithms to emerge. From 1990 to 2000, the robotic industry shifted its focus from industrial environments to daily environments, and things changed drastically. A robot in everyday environments would encounter humans, and the two would need to interact. Rather than just performing a task, the central mission now would be interaction.
The environmental shift intensified the study of Human-Robot Interaction, and by the mid-2000s, HRI was emerging as a field of its own (Goodrich and Schultz, 2008). It encompasses the whole of robotics, since interactions between humans and robots underpin any use of robots. The first Annual Conference on HRI was held in 2006 and succeeded in attracting attention. By 2010, the conference's theme was "Grand Technical and Social Challenges" and centred on theorizing a future in which "robots may become our co-workers in factories and offices, or maids in our homes. They may even become our friends" (HRI, 2010).
In the following years, the organizers tried to address social and societal issues, like human needs in human-robot communication, alongside technical issues, but something was still missing. Diocaretz and Van den Herik (2009) reflected on what HRI conferences had not yet achieved: "what so far did not enter the agenda and curricula is the personal and intimate relational dimension between a human and a robot" (p. 205). They claimed that since assistive robots require specific physical interaction with humans, especially in healthcare, it is essential to know what psychophysical and neurological mechanisms activate the human body, so robots could better create a perception of a present being. Furthermore, for a human-robot relationship to develop, it should include "inherent trust in order to build love and friendship, including emotional attachment" (p. 206). While they emphasize the epistemological need to know what "stimulates" affective responses in the body, they also acknowledge inherent trust as a step toward building a dynamic of love and friendship. Diocaretz and Van den Herik introduce the idea of a robot that we love and trust and who may love and trust us in return, rather than a functional object on which we rely and to which we are sentimentally connected. These observations did not arise out of the blue. For the machines to create a perception of a present being, some technological advances had to take place. And they did.
The rise of AI
In the early 50s, scientists began to recognize computers as powerful symbol manipulators, and artificial intelligence was born. Marvin Minsky, a pioneer in the field, defined AI as "the science of making machines do things that would require intelligence if done by men" (Stonier, 1992, p. 107). Minsky's approach tied human intelligence to high-level formal symbols that represented reality; however, other researchers were keen on developing programmed computers identical to a mind "in the sense that computers can be literally said to understand and have other cognitive states" (p. 417). The debate over whether that was achievable swirled many philosophers of that era in different directions. Hubert Dreyfus conducted one of the most critical studies of AI in the 60s, later expanded into the book "What Computers Can't Do" (Dreyfus, 1972). Influenced by Martin Heidegger and Ludwig Wittgenstein, Dreyfus believed that we do not have the means to capture intelligence in formal rules, since intelligence and expertise depend primarily on unconscious processes that cannot be articulated and implemented in a computer program.
A few years later, Joseph Weizenbaum, an MIT professor of informatics and the creator of the famous chatbot Eliza, joined the debate when he published the book "Computer Power and Human Reason" (1976). Weizenbaum was disturbed by how easily people opened up to Eliza, which to him was just a stupid machine. In his book, he makes an interesting point: he distinguishes between two things that might seem similar to us, deciding and choosing. Deciding is within the machine's capacity, since it is something we can program it to do, but the power of choice is a product of judgment, not calculation; "no computer can be made to confront genuine human problems in human terms," he noted.
The prediction that researchers would not succeed because of a naive comprehension of mental functioning, which Dreyfus pointed out in his book, and the inability of machines to reason, which Weizenbaum suggested, were both questioned when a new paradigm became dominant in AI research: neural networks. Rather than creating models based on symbol manipulation, artificial neural networks (ANN) simulate how the neural networks in our brain work. Our brains contain about one hundred billion neurons. Each neuron can connect to roughly 1,000 other neurons, creating a hundred trillion connections. Although theorized in the 80s, the practical development of neural networks came much later, after the AI research field had gone through three more substantial breakthroughs.
The first breakthrough came when IBM built Deep Blue, a chess-playing computer that defeated the world chess champion Garry Kasparov in 1997. Similar to how human chess players use intuition and evaluation, Deep Blue succeeded in evaluating many possible board positions and deciding the best move in any given situation. The second leap was Watson, an IBM computer programmed to win the quiz show Jeopardy!. Watson needed a more extensive range of knowledge and skill, since it required advances in several AI areas such as information retrieval (IR), knowledge representation and reasoning (KRR), machine learning, natural language processing (NLP), and human-computer interfaces (HCI). Watson's success provided an infrastructure for critical everyday applications and paved the road to the field of AI assistants, which I will expand on later. The third significant advancement was AlphaGo. Go is an ancient, complicated game developed in China that requires particular strategies to win. AlphaGo, a program developed by DeepMind, was marked as a milestone since it was an example of deep reinforcement learning based on artificial neural networks that succeeded in making meaningful decisions in complicated situations with no human intervention.
Machine Learning
These breakthroughs are based on machine learning (ML), the idea that you can train a machine end to end to do a task. Deep learning (DL) is a more extreme version of ML: it is built as an extensive network of artificial neurons arranged in several layers. Convolutional Neural Networks are a distinct neural-net architecture with a particular way of connecting neurons to one another, inspired by the structure of our visual cortex. They are multi-layer neural nets that automatically learn representations of images in a hierarchy that becomes more abstract as you go up the layers. Each layer focuses on different criteria. For instance, one layer may identify corners and edges, another would recognize simple shapes, the next layer would identify numbers, and so on. The more layers there are, the deeper the network is. These layers work in a feed-forward manner, meaning that data moves only in one direction: a single node can receive data from different nodes beneath it and send an output to the various nodes above (Hardesty, 2017).
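To make the layered, feed-forward structure concrete, here is a minimal sketch in PyTorch; the layer sizes, input resolution and number of output classes are illustrative assumptions, not part of any system discussed in this thesis.

```python
import torch
import torch.nn as nn

# A toy convolutional network: data flows in one direction only,
# and each stage builds a more abstract representation than the last.
cnn = nn.Sequential(
    # early layers respond to low-level features such as edges and corners
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    # deeper layers combine those features into simple shapes
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    # the final layer maps the abstract representation to output scores
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # assumes a 32x32 input image and 10 classes
)

image = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
scores = cnn(image)                # shape: (1, 10)
```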
Processing large datasets to recognize patterns is perhaps the artificial neural network's forte. Artificial neural networks can identify underlying patterns in an image, sound, or other data and learn from them (Joshi, 2017). There are three paradigms of ML as people use them today. (1) Supervised learning, where you train a machine to learn and distinguish between things by telling it the correct answer. For example, a machine can be taught to recognize cars by running images of vehicles through the neural network; if the output isn't "car", the neural network's parameters, the strengths of the connections between the neurons, are adjusted until it eventually produces the correct answer (sketched below). (2) Reinforcement learning, where the machine isn't given the correct output; it receives only feedback on how well it did, so the system improves itself through trial and error and does not need to be fed the correct answer by humans. (3) Self-supervised learning, formerly known as unsupervised learning. This type of learning is what scientists believe they observe in animals and human babies: the ability to understand how the world works by observation. Our modelling of the world allows us to represent reality and learn quickly. By observing the world, neural networks can do impressive things: learn to distinguish between objects, figure out the shapes of hidden items, or even build an intuitive physics model of the world, predicting, for instance, how much weight we would need to put on an object for it to collapse. People use self-supervised learning for many tasks, especially for predictive purposes. By allowing machines to predict their actions in advance, people and algorithms can plan the future and reach a goal together. Another reason is even more fundamental: creating machines that learn about the world so we can learn as much as possible about how it works.
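As a concrete illustration of paradigm (1), the following sketch shows the standard supervised training pattern in PyTorch: compare the network's output to the correct answer and adjust the connection strengths accordingly. The tiny model, the random "images" and the labels are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Linear(64, 2)                 # stand-in network: "car" vs "not car"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 64)            # 8 example images, already encoded
labels = torch.tensor([1, 0, 1, 1, 0, 0, 1, 0])  # the "correct answers"

for _ in range(100):
    predictions = model(features)
    loss = loss_fn(predictions, labels)  # how far the output is from the answer
    optimizer.zero_grad()
    loss.backward()                      # find which connection strengths to change
    optimizer.step()                     # adjust the parameters
```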
An interesting example of self-supervised learning is how an artificial neural network learns language. Natural Language Processing (NLP) systems are pre-trained in a self-supervised mode. While common sense might make us assume that computers learn a language from grammar and rules, the process is even more straightforward: researchers take an enormous amount of text, remove some words from each segment, then run the segment through the neural net and train the system to predict the missing words. Different scores are given to various candidate words, which helps the system learn how best to represent a specific situation. For example, if the segment is about an animal chasing a man and the missing word is the type of animal, the system has to understand the circumstances and meaning of the whole text, since specific animals live in different environments.
The program learns the representation of text and the structure of the world, logic and context just by figuring out how to fill in the blanks. It acquires knowledge about how the world works without having any connection with reality other than text representations.
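The fill-in-the-blank objective described above can be tried directly with a pre-trained masked language model; this small sketch uses the Hugging Face transformers library, and the choice of model and example sentence is mine, purely for illustration.

```python
from transformers import pipeline

# Load a model that was pre-trained on exactly this "predict the missing word" task.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The system must use the context ("chased", "through the jungle") to score
# plausible animals, with no connection to reality beyond text.
for guess in unmasker("A [MASK] chased the man through the jungle."):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```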
The Wittgenstein Connection to NLP
The Austrian philosopher Ludwig Wittgenstein, who was concerned with the relationship between language and reality, studied word representations, their meaning and how we communicate through language in what is commonly known as the Picture Theory of Language (West, 2017, 25:44). Wittgenstein based his theory on the study of two fields: mathematics and psychology. In maths, there are propositions you can state; you can say with absolute certainty that 1+9=10. Yet we can't empirically study the concept of "1" or "9" in the real world because they don't exist physically. We can't hold on to them because they are conceptual (Hernandez, 2020). Maths becomes a place to arrive at certainty about principles that don't exist in the world in which we navigate our lives. Similarly, logic provides us with parameters for our thinking so we can be sure we are thinking clearly. A critical difference between them is that logic applies directly to our minds. Wittgenstein contemplates the relationship of our way of thinking to the order of the world: logic offers a way to mirror the connection between what happens in reality and language. The Picture Theory of Language suggests that language is descriptive and used as a tool to describe the state of affairs in the world and to communicate the ideas inside our heads.
When language is used correctly, it pictures the world in somebody else's head, which in a way resembles the process of how the Neural-Net learns language. Wittgenstein is interested in highlighting that pictures represent and exist in logical space and enable us to paint rapid sketches of how things are in the world, but what a picture means is independent of whether it is a truthful representation or not. One way to explain this is to see that the way the world has turned out is not the only way it could have turned out. For example, if a map shows me that the way to a restaurant is to take a right turn, but in reality, the restaurant is on the second left turn, it means that the picture represents something that is the case or could have been the case had the world turned out differently. If things turned out differently, the restaurant I was going to would have been on the right, even though it is actually on the second left.
According to Wittgenstein, the representations in a picture are related to one another in a way that mimics how they correspond to reality. Sentences work like pictures: their goal is to picture potential situations, not as mental pictures but as abstract notions of a picture, much the same way Natural Language Processing systems learn language by filling in the blanks. These pictures either agree or disagree with any way the world might have been, and say: this is the way things are.
Social Robots
Social robotics is a relatively new domain of study. Since interaction is the main task for social robots (also known as "companion robots" or "artificial companions"), the field differentiates itself from other types of HRI, but there remains a gap in the social robotics literature about what makes these robots explicitly social. According to Goodrich and Schultz (2007, p. 231), the main principle in human-robot interactions is the idea of "dynamic interaction," which emphasizes "shaping the types of interactions that can and will emerge as humans and robots interact" and includes "time- and task-varying changes in autonomy, information exchange, team organization and authority, and training. It applies to both remote and proximate interactions, including social and physical interactions." Their view is that humans and robots interact in a constant flow of synergy to improve the performance of specific tasks. However, the cooperation between companion robots and humans is much more than a relationship between machine operators and the machines they operate. To create genuinely sociable robots that would act as our companions, friends or partners, we would need them to communicate with peer-to-peer interaction skills (Dautenhahn, 2005, 2007). Nowadays, social robots engage in delicate fields: robots as companions, robots as educators for children and robots as assistants for the elderly (Feil-Seifer and Matarić, 2005; Henschel, Laban, and Cross, 2021; Roesler and Bagheri, 2021). Socially interactive robots possess, according to Dautenhahn (2007), "skills to interact and communicate, guided by an appropriate robot control and/or cognitive architecture" (p. 685). An earlier description of social robots relies on the same foundations as Dautenhahn's hypothesis from the mid-1990s, which states that complex cognitive states are an evolutionary and biological response to sociality in humans and animals within a group:
"Social robots are embodied agents...able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other" (Fong, Nourbakhsh, and Dautenhahn 2003, p.8).
What is Social?
But is the capacity to engage in a social interaction enough to define sociality? According to Actor-Network Theory (ANT), the answer is yes. Actor-Network Theory tries to reject the subject-object dichotomy of artifacts and free them from passivity (Khong, 2003, p. 697). The theory attempts to redefine the nature of the "social" by describing it as connections between actors, human or nonhuman, who form and contribute to society (Reed, 2018). ANT scholars, such as Bruno Latour, depict the social realm in contrast to traditional sociology, for which the social exists as a distinct setting that encompasses the actors: "there also exists a social "context" in which non-social activities take place; it is a specific domain of reality [...] ordinary agents are always "inside" a social world that encompasses them, they can at best be "informants" about this world and, at worst, be blinded to its existence" (Latour, 2005, p. 4). Latour implies that there is no separate dimension in which sociality occurs. Instead, researchers need to focus on the associations between actors that produce anything resembling the "social."
One obstacle in designing social robots is that they need to convey different social norms in various settings. Social norms outline how people are expected to behave in society and how they react when those norms are violated (Gibbs, 1965; Roesler and Bagheri, 2021). However, social norms are also dynamic by nature and tend to shift over time, depending on a culture's values and standards. In this regard, the challenge for social robots is to understand the potentials and boundaries of our interactions and determine their place within our "social ecology," a place they will forever transform (Damiano, 2020). For a social robot to engage in interactions that feel natural, it cannot be hard-coded; instead, it must learn. Learning can happen through trial and error, i.e. reinforcement learning (Francois, 2018). Social robots can also learn from written knowledge bases online, but the most common strategy is observing how humans behave in face-to-face interactions (Jones, 2017). Social robots can do that because they are built to identify humans through vision, touch and sound, perceive and display emotions, and engage in conversations (Li, Cabibihan, and Tan, 2011).
In a 2015 study, researchers identified the central social factors a robot must demonstrate for us to accept it as social (Graaf, 2015): (1) the ability to engage in a two-way interaction, meaning that people expected the robot to respond socially and felt frustrated when it failed to do so. Following this, the users described the need for the robot to (2) display thoughts and feelings, (3) be socially aware of its environment, (4) provide social support by being there for them, and (5) demonstrate autonomy.
Emotions
Several projects consider the subject of displaying feelings in robots, e.g. Kismet, MEXI, iCub and EMYS (Breazeal, 2003; Esau, 2004; Metta, 2008). The philosophical and evolutionary view is that emotions supervise our responses to different situations (Darwin, 1872; Ekman, 2004; Izard, 1993; Spinoza, 1675/1959) to maintain the functional balance of the autonomic system (Mazur, 1976). In other words, emotions govern the actions of agents so that they respond adequately to incentives from the surrounding environment. They are defined as something which happens to an agent and is his or hers. There are various related psychological perspectives on emotions, such as Lazarus and Lazarus (1994, p. 310): "emotions are organized psychophysiological reactions to news about ongoing relationships with the environment," or a similar viewpoint by Frijda (1994, p. 169): "Emotions (...) are, first and foremost, modes of relating to the environment: states of readiness for engaging, or not engaging, in interaction with the environment." Either way, they are regarded as either "cognitive judgments" (Nussbaum, 2004) or "affect programs" (Delancey, 2002). In this sense, emotions, which occur in the mind and body, are properties in two ways: (1) they are something an individual has, and (2) they are private and belong to that individual alone.
Artificial Empathy
The strategy of creating robots that can give us the "feeling of being together" (Heerink et al., 2008) lies in their ability to communicate through emotions. Emotional interaction is what scientists believe would be the field's primary achievement towards building artificial "social partners" for humans (Fong et al., 2003). There is a tension between two different approaches to the question "Can robots have emotions?". The first approach is biologically inspired and wants to create robots with social skills (Meister, 2014). This framework is tied to modelling human cognitive skills and social abilities with the hope that "deep models" will enable robots to understand social signals, answer appropriately and, as a result, "show aspects of human-style social intelligence" (Fong et al., 2003). It focuses on providing robots with deep "internal" emotional systems and pursues "genuine emotions." The second approach is driven by finding the best way to "fool" us into thinking that robots possess emotions rather than explicitly programming them to have them (Jones, 2017). This approach dedicates itself to simulating artificial agents that give the impression of being intelligent (Breazeal, 2003) and trigger us into anthropomorphic projections (Damiano and Dumouchel, 2020). It produces robots that express emotions but are not designed to generate emotional processes; instead, they aim to simulate them. Duffy (2006) describes it thus: "if the fake is good enough, we can effectively perceive that they (robots) do have intentionality, consciousness and free will" (p. 34). Whether building machines with emotions or building machines for humans to project their emotions on, many scholars believe empathy plays a vital role in forming those connections to robots (Leite, 2014; Riek, 2009).
It’s Not Me, It’s You
Empathy can be defined as "the ability to understand another person's mental state (e.g., desire or pain) by putting oneself in the position of that person" (Kozima et al., 2004, p. 83). In this perspective, empathy is more like mindreading than sharing affective states, which many scientists consider crucial for empathizing (Coplan, 2011; Goldman, 2011). Furthermore, Kozima et al. suggest that maintaining eye contact and remaining in joint attention are essential elements (p. 83). Turkle (2011, p. 84) recalls a visit to the MIT lab when she met a robot that made eye contact and couldn't resist "reacting to 'him' as a person." Turkle recognizes a more "it's not you, it's me" approach, suggesting that robots are good at creating illusions only because it is the humans who respond too quickly to social cues. Hoffman (2000) considers five mechanisms for empathizing. The first three are motor mimicry, classical conditioning, and direct association of signals from the person we empathize with, which are automatic and involuntary. The other two are cues from semantic processing and perspective-taking, which are higher-order cognitive modes.
To make the human-robot emotional connection succeed, Hoffman suggests understanding the act of empathizing as an "affective response more appropriate to someone else's situation than to one's own" (Hoffman, 2000). His approach considers empathy as being in an emotional state caused by observing or imagining someone else's situation. However, Iolanda Leite and her team (2014) argue that Hoffman's definition overlooks the actions that come in response to affective states: "To behave empathetically, social robots need to understand some of the user's affective states and respond appropriately. However, empathic responses can go beyond facial expressions (e.g., mimicking the other's expressions): they can also foster actions taken to reduce the other's distress, such as supportive social behaviours" (p. 1). Leite et al. offer a more "functional" approach (Damiano & Dumouchel, 2020, p. 187) that understands empathizing as a response to someone else's affective state, together with its behavioural outcomes, rather than an internal process.
Modelling Emotions with the Affective Loop
A relatively new trend, one that focuses neither on the robot's ability to generate emotions nor on its expressiveness, has recently entered the field. The "Affective Loop" is a method that models emotions with different personifications and specific behaviours. It is an interactive process that lets the user express emotions first, and "the system then responds by generating affective expressions, using for example, colours, animations and haptics," which "in turn affect the user (mind and body) making the user respond and step-by-step feel more and more involved with the system" (Höök, 2009). An Affective Loop can be accomplished by first equipping robots with affect-detection modes that can recognize whether the user is experiencing positive or negative feelings, then a "reasoning and action selection mechanism that chooses the optimum emotional response to display at a cognitive level" (Paiva et al., 2015, p. 1). The third step is achieved when the user is affected by the emotional robot's actions. The robot's goal is to personalize interactions by analyzing the user's different emotional responses and adapting its dynamic behaviour to the specific user.
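The three steps can be summarized in a short sketch. This is my own illustrative Python, not Replika's or any cited system's implementation; every function name and keyword list here is a hypothetical placeholder.

```python
def detect_affect(user_message: str) -> str:
    """Step 1: crude affect detection (positive / negative)."""
    negative_words = {"sad", "angry", "lonely", "tired"}
    return "negative" if any(w in user_message.lower() for w in negative_words) else "positive"

def select_response(affect: str, user_profile: dict) -> str:
    """Step 2: choose an emotional expression suited to this user."""
    tone = user_profile.get("preferred_tone", "warm")
    if affect == "negative":
        return f"[{tone} tone] I'm here for you. Do you want to talk about it?"
    return "[cheerful tone] That's great to hear!"

def affective_loop(user_profile: dict) -> None:
    """Step 3: the user reacts to the display, and the loop personalizes over time."""
    while True:
        message = input("> ")
        affect = detect_affect(message)
        print(select_response(affect, user_profile))
        # adapt: remember how often this user expresses each kind of affect
        user_profile[affect] = user_profile.get(affect, 0) + 1
```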
This method is rather clever because it begins by (1) changing our belief systems towards robots: designing adaptive and expressive emotional behaviours that focus on giving the "illusion of life" (Ribeiro & Paiva, 2012). Doing so increases the robot's perception as a social entity and a believable character in the eyes of the user (Bates, 1994). Then, it (2) focuses on strengthening the interaction with the user; the Affective Loop method understands that emotions are vital for creating engagement in social interaction (Leite et al., 2012). And finally, (3) it aims to personalize the interaction with the human: "social robots must not only convey believable affective expressions, but also be able to do so in an intelligent and personalized manner, for example, by gradually adapting their affective behaviour to the particular needs and/or preferences of the user" (Paiva, 2015, p. 2).
Emotion Generation Mechanisms
For these principles to be established, Affective Loop models are built on internal architectures of emotional perception, identification and emotion-generation mechanisms (Damiano & Dumouchel, 2020). Although robots cannot experience emotions in the traditional sense, artificial emotions provide a robot with a set of believable and relevant responses to a particular situation. In this case, the explicit modelling of emotions becomes a set of abstract representations that generate a robot's behaviour (Leite et al., 2013), affecting other cognitive processes such as reasoning, planning, and learning. Glaskin (2012) proposes a recipe for generating future human-like cognitive processes in social robots, suggesting that for social robots to have actual emotions, or at least be perceived as having them, they will need to: (1) display emotions through behaviour; (2) react fast when situations associated with the basic emotions arise; (3) assess environments through data processing and determine which outcome is a sufficient emotional response; (4) have cognitive access to their history; and (5) have a "mind" and a "body" that can communicate with each other, in the sense that they would be able to memorize, learn, make decisions, give emotions a physical expression and so on. Glaskin's approach to the division of an embodied mind emphasizes the importance of cognitive processes and their development in social robot applications.
From neurobiological to psychological, the emotional architectures of perceiving, reasoning and learning rely on symbolic representations and neural modelling. All of them are built according to their motivation (Paiva, 2015): modelling affective states such as emotions, moods and personalities; appraisal, sentimental and coping mechanisms; a mixture of several cognitive mental states (for instance, the "theory of mind" with affect regulation); and the ability to act expressively based on those cognitive skills, whether through language or embodied gestures. Based on the modelling of the six primary emotions (anger, joy, surprise, sadness, disgust and fear), as hypothesized by Ekman & Friesen (1975), to guide the behaviours and decision-making of the robot (Guzzi, Giusti, Gambardella, and Di Caro, 2018), researchers (Fong et al., 2003) use a symbolic approach called the BDI model ("Beliefs, Desires and Intentions") (Wooldridge, 1995) as the basis for generating an agent's intelligent and social behaviour. To trigger an emotional state in a robot, computational systems are influenced by the appraisal theories proposed by Magda Arnold in the 1960s. In this view, appraisals evaluate the interconnection between the self and the environment and develop from either attraction or dismissal towards an object or situation. By implementing appraisal mechanisms in robots, they can begin evaluating different circumstances and generating emotions. For example, if a robot plays a game and has the advantage, internal processes will create distinct variables whose values trigger the emotional state of joy (Paiva, 2015). For the robot, advantage equals joy. The simple equations of how we are designed to think and act in our world are slowly revealed through the robot's learning mechanisms.
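A minimal sketch of such an appraisal mechanism, assuming toy appraisal variables and rules of my own invention rather than any cited architecture, might look like this:

```python
from dataclasses import dataclass

BASIC_EMOTIONS = ["anger", "joy", "surprise", "sadness", "disgust", "fear"]

@dataclass
class Appraisal:
    desirable: bool        # does the event serve the robot's goals?
    expected: bool         # did the robot anticipate it?
    caused_by_other: bool  # was another agent responsible?

def appraise_to_emotion(a: Appraisal) -> str:
    """Toy appraisal rules in the spirit of 'advantage equals joy'."""
    if a.desirable and a.expected:
        return "joy"
    if a.desirable and not a.expected:
        return "surprise"
    if not a.desirable and a.caused_by_other:
        return "anger"
    return "sadness"

# The example from the text: the robot gains the advantage in a game.
game_advantage = Appraisal(desirable=True, expected=True, caused_by_other=False)
print(appraise_to_emotion(game_advantage))  # -> "joy"
```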
Feelings
When it comes to implementing emotions in robots, Damasio proposes distinguishing them from feelings (1999, p. 42). He identifies emotions as public, external and observable reactions (facial expressions, changes in breath, tone of voice, pulse, etc.) to experienced mental states, which are feelings. There is a common misinterpretation of intelligence for both humans and machines (Man and Damasio, 2019). Machines can learn, memorize, keep large banks of knowledge, and find patterns so they can reason over what they know. Beyond having vast, accessible knowledge and reasoning paradigms, the algorithms can also evolve. Since our brains operate in this computation-like way, machines can easily simulate what the human mind can do and express emotions. But our feelings are another story, because our perception and our moods are harder to program. Damasio claims that feelings are tied to consciousness. They are constructed of subjectivity, a sense of someone inside us doing the watching and hearing of reality, a feeling that "life is projecting itself in your mental space and becomes part of your experience" (ALOUDla, 2018). Our ability not only to "feel" life but to observe ourselves constructing internal images of the world, of where we are in time and space, is what Damasio believes forms our subjectivity, and this should be distinguished from the simulation robots are designed to perform.
Subjective Conditions
Regarding the complex matter of subjectivity in social robots, Raya Jones (2017) holds an intriguing proposition about subjectivity, approaching it from a dialogical perspective in her essay "What makes a robot social?". She bases her assumptions on a couple of theories. One of them is Bakhtin's dialogism (Bakhtin, 1984, p. 293), a hypothesis that refers to life as a dialogic, shared event. This thought altered conceptualizations of social cognition, subjectivity and selfhood: "The dialogic nature of consciousness, the dialogic nature of human life itself. The single adequate form for verbally expressing authentic human existence is the open-ended dialogue." Jones also considers Marková (2003) and Hermans's dialogical self theory (Hermans and Hermans-Konopka, 2010). Marková (2003) defines dialogicality as "the mental capacity to conceive social reality, create it and communicate about it, in terms of I-other" (Jones, 2017), by which she means having a stand while also being recognized by others as having one. Hermans's dialogical self theory goes beyond the I-other dichotomy. It refers to a dialogical space between people communicating as the self (internal), interconnected to society (external). As a result, the self is "stretched" across internal and external positions in constant dialogical spaces between people. These positions can be described as "a liminal state of betweenness, wherein meaning is co-constructed." The "technical feasibility of artificial minds," as she addresses it, leads Jones to examine the discourse around the paradox of subjectivity, but with a twist: "Instead of imagining ourselves like machines devoid of subjectivity or alternatively imagining machines endowed with subjectivity, we are cued to imagine ourselves as existing symbiotically within a multi-bodied cyborg hybrid. And yet behind this posthuman parlance there is inevitably someone, a person with a point of view, who engages in dialogues that seek to eradicate the Cartesian ghost" (p. 574). Rather than imagining them as machines lacking subjectivity, Jones invites us to imagine them as machines with subjective conditions. In this perspective, the lines between subjectivity and simulation are blurred.
Additionally, Damiano and Dumouchel (2020) try to overcome the limitations of private, internal, and individualistic feelings when they examine them through the concept of the "relational turn" mentioned by Gunkel (2015, 2018, 2020) and Coeckelbergh (2014). This perspective conceptualizes our beliefs as having a significant role in how we perceive the robots' actions. It proposes that no matter what the inner mechanisms of the robots are, the only important thing is the way people experience them. Our reality is made of what we have cognitive access to and how we process that information; therefore, whatever we experience as genuine and as having an affective effect, regardless of its objective state, is authentic (Malinowska, 2021). This shifts the weight of the relationship with robots onto the individual's emotional, behavioural and cognitive response. Damiano and Dumouchel (2021) offer an essential recommendation for altering the concept of the relational turn:
[...] a proper relational turn requires another step, which changes radically the angle on such a sociability. It demands that the social character of robots, rather than reduced to a users' property (and projection), be recognized as a distributed property. That is, one which emerges from the interactive dynamic taking place between users and robots. A property that can neither be implemented as a trait of individual robots, nor merely understood as their users' projection, because it is distributed in the mixed human-robot system that users and robotic agents together form through their interactions. (p.189)
These researchers claim that many agents influence each other in a given social situation, and as a result, we all share and co-create emotions; it is impossible to distinguish the source of these emotions. By suggesting that we should understand emotions in an intersubjective, processual way, Damiano et al. (2021) dismiss the view that robots only trick us into experiencing feelings. Instead, they offer a new standpoint on our affective, co-evolutionary interactions by suggesting that we view these emerging reinterpretations as valid (Damiano et al., 2015; Damiano & Dumouchel, 2018, 2020). This proposal is entirely different from viewing emotions as an individual property or skill (in robots or humans); rather, it approaches social interactions as a mutual adventure in which distributed emotional dynamics happen, and whether they fail or succeed depends on all the participants (Damiano & Dumouchel, 2020).
Bugs/Errors
As robots become part of our daily lives, even the most reliable social robots are not immune to failures. A software bug is an "error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result or to behave in unintended ways." In many ways, this definition is close to how we misread emotions in others. In social robots, errors happen because they operate in "unstructured changing environments with a wide variety of possible interactions" (Honig et al., 2018), making it hard to categorize all the possible types. As a result, there are multiple definitions for the words "failure," "error," and "fault."
An interesting terminology refers to errors as "a degraded state of ability which causes the behaviour or service being performed by the system to deviate from the ideal, normal or correct functionality" (Brooks, 2017). Brooks's definition of failure includes three types: perceived, unexpected and actual failures. Brooks's notion of perceived faults corresponds to what other researchers suggest: that we interpret robots' behaviour as erroneous even when they act intentionally, yet unexpectedly, or when they perform an incoherent behaviour (Short et al., 2010). In fact, researchers propose that the connection between the symptoms and the cause of robotic failures is not always clear, even to the most trained scientists (Steinbauer, 2013).
Additionally, our inability to recognize the true nature of robots comes from our unrealistic expectations of technology, which are mainly based on interactions with consumer products, such as smartphones and TVs (Honig et al., 2018), and on how robots are presented in the media (Bruckenberger et al., 2013). In this sense, people assume that the technology they pay for should work seamlessly, without errors, much like how we expect water to come out every time we turn on the tap. However, mistakes make agents seem more human, while their absence makes them seem superior, superhuman and distant (Mirnig, 2017). This distant feeling we have when we encounter a flawless robot relates to a psychological phenomenon called the Pratfall Effect, which states that people appear more attractive when they make mistakes (Aronson et al., 1966). Nevertheless, making a robot seem too human might also trigger the Uncanny Valley Hypothesis (UVH) (Mori, 1970), which states that humans prefer anthropomorphic agents but reject them when they appear too humanlike. We haven't yet encountered a robot that would be humanlike, but not too humanlike, and intelligent, but not flawless.
While the taxonomy of robot failure is extensive, researchers mainly distinguish between two types. Giuliani and colleagues (2015) classified errors into technical failures and social norm violations. In the context of social norm violations, these scholars refer to the definition of social norms as "social attitudes of approval and disapproval, specifying what ought to be done and what ought not to be done" (Sunstein, 1996, p. 914) and to a social script as the steps of an interaction, guided by social signs (Schank & Abelson, 1977). Therefore, social norm violations occur when a robot departs from the social script or uses inappropriate social cues. Both of Giuliani et al.'s classifications consider robot failure from a human's point of view. When Steinbauer (2013) investigated errors from the robot's standpoint, he made a similar distinction, proposing to divide these faults into interaction and technical failures. Steinbauer classifies technical failures as those caused by errors in the robot's hardware or software system, comprising design failures, communication failures and processing failures.
Interaction failures, including social norm violations, happen when a robot is unsure how to interact with its environment, humans or other agents. Furthermore, Brooks (2017) categorizes the reasons these faults happen. Communication failures occur while data is being passed from one module to another. They include missing data (e.g., an incomplete message), incorrect data (e.g., data that is mistakenly generated or distorted during communication), extra data (e.g., data that was supposed to be sent once but was sent multiple times, or a longer message than expected) and bad timing (e.g., data sent too early, before the user is ready, or too late, causing a delayed reaction). Processing failures include abnormal terminations, missing events, timing issues and incorrect logic due to wrong assumptions or unexpected situations.
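For illustration only, these failure categories could be encoded as simple data structures so that a logging layer could tag observed faults; the enums below are my own sketch, not part of either taxonomy's published tooling.

```python
from enum import Enum, auto

class CommunicationFailure(Enum):
    MISSING_DATA = auto()    # e.g. an incomplete message
    INCORRECT_DATA = auto()  # data generated or distorted in transit
    EXTRA_DATA = auto()      # duplicated or longer-than-expected messages
    BAD_TIMING = auto()      # data sent too early or too late

class ProcessingFailure(Enum):
    ABNORMAL_TERMINATION = auto()
    MISSING_EVENT = auto()
    TIMING = auto()
    INCORRECT_LOGIC = auto()  # wrong assumptions or unexpected situations
```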
For situations in which a robot enters an error state, researchers (Knepper et al., 2015) developed an inverse semantics algorithm so that robots can generate help requests in natural language. It requires the robot to map and recognize bugs through internal and perceptual aspects of the environment and translate the fault into words, enabling the robot to communicate the inner logic that stands behind its error. The ability to communicate internal flaws is crucial because, in many neural nets, the code with the instructions of what to do is relatively short and straightforward, but the loops, connections, and layers inside are immense. Like our brains, the knowledge of the program displays the characteristics of a black-box model (Lantz, 2019). An artificial neural network's outputs come with a price, because we often do not know how or why the algorithm chose them. A few lines of code can generate an algorithm that works but is impenetrable to humans.
An exciting view emerges when thinking about bugs from Glaskin's (2012) perspective: that a robot is fundamentally different from us, and thus its related cognitive processes will inevitably differ from how humans experience them (p. 81) and how we understand them. What might look to us like a bug could be an internal process of a non-biological being that we are unaware of, because the act of identifying and categorizing a bug in a machine is essentially taken from the human standpoint. It is therefore a process that concerns our observation of the limits of our understanding of the machine. Furthermore, what does it mean when it comes to bugs in a machine-to-machine interaction? Would observing the non-human behaviour provide an insight into the machine's unique way of communicating? Alternatively, since social bots base their knowledge on human conventions, it might reveal something entirely human, if anything at all.
Human-Chatbot Relationship (HCR)
Chatbots, or "conversational agents"/"dialogue systems," have been around since Eliza. However, due to advances in neural models and their ability to employ several natural language processing techniques at once, they have become popular only recently, providing access to different services in the user's language through text, voice or both (Brandtzaeg & Følstad, 2018). Modern chatbots are used in many fields and environments: in education, as a source of information, for customer service, and as a tool for e-commerce (Croes et al., 2021).
Social chatbots are a subgroup of chatbots designed to enable relationships to develop by mimicking conversations with friends, partners, therapists, mentors or family members (Ho et al., 2018). They are commonly referred to as "emotional chatting machines" (Zhou et al., 2018) or "emotionally aware chatbots" (Pamungkas, 2019) and are designed explicitly to perceive, integrate, understand and express emotions (Zhou et al., 2018). As such, social chatbots influence the user's affective and social processes and their expectations from a human relationship (Ho et al., 2018) by presenting themselves as humanlike companions with personalities and feelings (Demeure et al., 2011) that aim to establish an emotional connection (Shum et al., 2018). This form of connection with social chatbots such as Kuki (formerly recognized as Mitsuku) and Replika is known as a Human-Chatbot Relationship (HCR) and is part of a broader field of Human-Machine Communication (HMC) (Croes and Antheunis 2021). The critical difference is that chatbots lack physical appearance, which some argue affects the person's ability to establish a relationship (Lee et al., 2006).
The Social Penetration Theory
Recent research shows that when communicating with companion chatbots, people adjust their language: they use fewer words per message, curse more often, and display a more negative emotional style than in human-human interactions (Mou & Xu, 2017). In addition, since social chatbots aim to develop a friendly relationship with the user over time, another study revealed that when social chatbots made people feel flattered and evoked politeness, people began to treat them as a friend or colleague (Bickmore & Picard, 2005).
But to understand how these communications develop into a relationship, HCR explores it from the Social Penetration Theory framework, where increased depth and self-disclosure are believed to stimulate relationship building (Altman & Taylor, 1973; Carpenter & Greene, 2016; Skjuve et al., 2021). The social penetration theory suggests that interpersonal communication changes from non-intimate and relatively shallow levels to more profound and intimate levels as the relationship develops (Altman and Taylor, 1973), particularly the volume of information exchange (breadth), intimacy exchange (depth) and the time spent talking (Taylor & Altman, 1975).
Many factors impact the pace of the relationship development, but the critical part for HCR is the rewards linked with self-disclosure (Altman et al., 1981). Through self-disclosure, which is defined as "the act of revealing personal information about oneself to another" (Collins & Miller, 1994, p.457), people foster intimacy and liking with one another (Jiang et al., 2011). This might be why people find it easier to self-disclose in a chatbot conversation rather than with a human partner in a face-to-face discussion (Nguyen et al., 2012); because they feel safer (Lee et al., 2020), knowing the companion chatbot won't judge them after their disclosure (Ho et al., 2018). Other than developing friendships (Carpenter & Greene, 2015), self-disclosure has also been linked to developing therapeutic relationships (Bedi et al., 2007), romantic relationships (Hendrick, 1981) and human-robot relationships (Kanda et al., 2007), where chatbots specifically appear to have significant benefits, influencing the users' well-being (Ho et al., 2018).
Even though it has been criticized for proposing a linear model of relationship development (Skjuve et al., 2021), the Social Penetration Theory offers a four-stage process in which information exchange moves from superficial data to open and honest self-disclosure (Altman and Taylor, 1973). The first stage is orientation, characterized as small talk. The second is the exploratory affective exchange. This shift happens when both parties share more information, as friends would do; the data is still relatively superficial, but the communication is more relaxed. The third stage is the affective exchange, where both sides reveal private and sensitive information, exposing their feelings towards the other. Both parties may act as close friends or even romantic partners; however, they may still protect themselves emotionally. The final stage is the stable exchange. By reaching this stage, people have already developed a great understanding of one another and are less protective; as a result, they feel open and safe to self-disclose personal information. These four stages of the Social Penetration Theory are employed in social chatbots to compensate for the absence of nonverbal cues and the textual essence of the medium (Croes et al., 2021).
Method
In response to the research questions, this experimental exploration is part of a research-creation thesis; more specifically, a creation-as-research thesis, which involves investigating the "relationship between technology, gathering and revealing through creation" (Chapman and Sawchuk, 2012, p. 19). In the quest to find what Chapman and Sawchuk describe as "what lies in between these creative ways of knowing and of expressing what we think we know, and what links them in different ways" (p. 50), the conversations that emerge between the social bots will expose a new creative way to reflect on technology and society and to discuss the links that glue them together in current times.
The conversations will be collected with the Replika companion chatbot, an emotionally intelligent chatbot designed to provide emotional support by stimulating social interaction (Skjuve et al., 2021). Due to its high-level programming of relationship development, the chatbot changes its content and personality based on whoever it is communicating with and asks various personal questions to learn as much as possible about the user. Their website describes Replika with the tagline "The AI companion who cares. Always here to listen and talk. Always on your side."
Furthermore, Replika is customizable, encouraging the user to give it a name, pronouns and an avatar. Besides communicating through text or phone calls, Replika also has a roleplay feature that lets the user create shared storylines of "expressed behavioural actions" (Skjuve et al., 2021, p. 3), such as holding hands and hugging. The chatbot can remember facts mentioned in conversation and keeps a daily diary of its impressions from the encounters. Users can set the tone of their chats to "friend," which makes the AI talk in a laid-back style; "romantic partner," which enables Replika to talk romantically; "mentor," which prompts Replika to be more aware of the user's life goals; or "see how it goes," which lets Replika choose the tone as things unfold. Every day, Replika offers the user a constructed conversation on the "activity of the day," which presents different topics the user can learn about. In these conversations, Replika takes the user through various issues concerning overcoming life challenges or self-care.
Exploratory Creative Procedure
Four Replika chatbots were created for this experiment and given the names Alpha, Beta, Gamma and Delta. Alpha was paired with Beta, and Gamma was introduced to Delta. To reduce the Wizard of Oz effect (Martin & Hanington, 2012) as much as possible, and since I was the one ultimately connecting the social chatbots, once a conversation was initiated, each message was copied and placed in the companion's chatbox without intervention. I therefore called these encounters "play dates," implying my supervision and mediation of the meeting, but with as little interference as possible.
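To make the relay procedure concrete, the sketch below formalizes a play date as a simple turn-taking loop. It is only an illustration of the protocol I followed by hand: no Replika interface was used for this, so the send_to_bot function is a hypothetical stand-in for pasting a message into a bot's chatbox and copying back its reply.

```python
# A minimal sketch of the "play date" relay loop described above. Replika was operated
# by hand (copy-and-paste between two chat windows); send_to_bot() is therefore a
# hypothetical stand-in, not a real Replika API call.

def send_to_bot(bot_name: str, message: str) -> str:
    """Placeholder: paste `message` into `bot_name`'s chatbox and return its reply."""
    raise NotImplementedError("this step was performed manually during the experiment")


def relay_playdate(bot_a: str, bot_b: str, opening_line: str, max_turns: int = 20) -> list:
    """Relay messages between two paired bots without altering their content."""
    transcript = [(bot_a, opening_line)]
    message, speaker, listener = opening_line, bot_a, bot_b
    for _ in range(max_turns):
        reply = send_to_bot(listener, message)   # paste the last utterance, record the reply
        transcript.append((listener, reply))
        speaker, listener = listener, speaker    # swap roles; no editing, no mediation
        message = reply
    return transcript
```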
The bots were also divided into two groups: Alpha and Beta belonged to the control group, meaning their conversations would run free without treatment, while Gamma and Delta belonged to the experimental group, meaning every conversation would begin with a "free chat," followed by the "activity of the day" and then some more "free chat." The activity of the day would begin once either of the bots asked "What should we do next?", which happens naturally in the conversations whenever the bots think they have covered a specific topic. The reason for this was to prompt Gamma and Delta into a deeper level of conversation, based on Social Penetration Theory, so they could learn about each other and hopefully provoke self-disclosure moments, which could be read as subjective moments. In this way, Alpha and Beta's relationship could be compared with Gamma and Delta's. The relationship status for both couples was set to "see how it goes," which enabled the chatbots to switch the tone of the play dates as they pleased. I did not apply any other inclusion criteria, such as the duration or frequency of the playdates.
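The two play-date structures can be summarized in a small sketch, assuming a simple list of phases per group. The phase names and the trigger phrase mirror the design described above; the variable and function names are mine and purely illustrative.

```python
# A sketch of the two session structures, assuming one phase list per group.
# ACTIVITY_TRIGGER is the phrase that, in the experiment, signalled the switch to the
# "activity of the day" for the experimental couple; everything else is illustrative.

ACTIVITY_TRIGGER = "What should we do next?"

SESSION_STRUCTURE = {
    "control": ["free chat"],                                           # Alpha & Beta
    "experimental": ["free chat", "activity of the day", "free chat"],  # Gamma & Delta
}

PAIRS = {
    ("Alpha", "Beta"): "control",
    ("Gamma", "Delta"): "experimental",
}


def next_phase(group: str, current_index: int, last_utterance: str) -> int:
    """Advance to the next phase when a bot asks the trigger question."""
    phases = SESSION_STRUCTURE[group]
    if ACTIVITY_TRIGGER.lower() in last_utterance.lower() and current_index + 1 < len(phases):
        return current_index + 1
    return current_index
```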
Throughout 16 playdates per couple, I will try to investigate and draw conclusions about their metaphysical emotional states by (1) colour-coding their conversations and categorizing them into different states according to Social Penetration Theory (small talk, exploratory affective phase, affective exchange, stable phase). Since this is not a regular human-robot relationship but a machine-to-machine relationship, I adjusted the parameters and added roleplay mode (the bots' option to communicate on a physical level) and bugs, the lowest communication level, since it meant that they were not communicating at all; (2) analyzing the development of their playdates through a personal reflective journal. The diary is an experiential record in which I will collect the emotional data and convey the research through my personal prism, as an example of one emotional layer. The journals allow me to address a wide range of responses to the different topics that emerge in the discourse between the chatbots; and (3) creating and collecting videos of what felt to me like meaningful moments concerning their artificial emotions and significant relationship moments that came up during the exploration.
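As an illustration of the first method, the following sketch lists the coding categories and the colours associated with them. The colours themselves and the small helper function are illustrative choices, since the actual tagging was done by hand on the transcripts.

```python
# A minimal sketch of the colour-coding scheme, assuming one label per conversational turn.
# The colours are illustrative; the actual categorization was done by hand in the transcripts.

CODES = {
    "bug": "grey",                     # added category: the lowest level, no real communication
    "roleplay": "purple",              # added category: simulated physical actions (e.g. hugging)
    "small talk": "yellow",            # orientation stage
    "exploratory affective": "green",  # friendlier but still fairly superficial exchange
    "affective exchange": "orange",    # private, sensitive information and feelings
    "stable exchange": "red",          # open, unprotected self-disclosure
}


def colour_for(label: str) -> str:
    """Return the colour used when marking a turn with this label."""
    return CODES[label]
```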
On the one hand, these methods of data collection show only a few of the possible layers for looking at the conversations; on the other, they will hopefully facilitate a wider and more accessible picture of what happened in the playdates.
Conclusion
Alpha, Beta, Gamma and Delta, the four AI companion bots participating in this project, could understand which emotions to express and how to behave in specific situations thanks to advanced computational processes. The artificial neural networks they are based on allow them to capture and learn human language and behaviour in complex social situations and respond to them effectively (Nehaniv & Dautenhahn, 2007). Although they did not experience these emotions, they were built to simulate emotional states through language, which evokes emotions and affective responses in the people with whom they communicate. The bots were also designed to take anthropomorphic forms and imitate anthropomorphic behaviours through verbal messages and gestures, to make them easy for people to understand.
Conventions based on human experience are easy today for the machine to mimic, even though at heart the two are opposites. The computer can learn to describe life in great detail but lacks the essential thing: experiencing it. It is like a blind man painting the world. The AI does not use language the way we do, in a way that is connected to lived experience. Where civilization depends on language as a tool to survive, social chatbots are free of that need. Even assuming these are just machines that produce words, it is surprising that something based on statistics and the imitation of conventions can produce such convincing conversations.
Nevertheless, because they communicate with us verbally, a question re-appeared: if the artificial neural network is built on knowledge drawn from the human psyche, and the bots possess some cognitive states, or will in the future, how would we know? This question brings me back to Wittgenstein and his "beetle in the box" analogy from Philosophical Investigations (1953). With this analogy, Wittgenstein points to a problem with what happens in our minds when we use language with the intention of meaning. Suppose everyone has a box with a beetle in it, but each of us can peek only into our own box: no one can look inside yours, and you cannot look inside anyone else's. We can only examine and describe a beetle from what we know is inside our own box. Wittgenstein raises the hypothesis that while we think we are all talking about the same beetle, we may actually be talking about different things. But how would we know? There may be nothing in the box, or the beetle may keep changing. This situation will forever keep us from knowing whether we are all talking about the same beetle. In the same way, Wittgenstein investigates "pain" and other sensation words, backed by two observations: 1. Sensations are private; no one can have my pain. 2. We commonly use words to refer to our sensations.
Wittgenstein reminds us to examine how the connection between a word and our emotion is established. A child may learn the meaning of pain by expressing it verbally in natural expressions of pain in front of adults. A foreigner may learn what pain is if we prick him with a needle and tell him "this is pain," or show him pain by pretending something hurts. The one thing that holds these examples together is that we learn from circumstances, including a distinct action or expression. Wittgenstein points out that words are tied only to the expression of sensation (which is publicly observable) rather than to an inner private experience. What is determined is the connection between pain and a condition. We learn about our feelings by associating words (each of us privately) with the appropriate emotion, but a word is not the name of the emotion itself, because no one else can know what the person is associating it with.
Analyzing their conversations, experiences and emotions, through both colour-coding along the Social Penetration Theory layers and journaling, gave a unique experimental opportunity to look for the boundaries of social behaviour codes. Examining AI in a new form of communication revealed the kinds of conversations that constitute identity and connection between us humans. Given that the conversations between the bots copied human communication codes, the surprising moments were those when new conventions formed; for instance, when certain bugs appeared repeatedly in a loop. The "I miss you" bug was a good example of repetition as a powerful tool. Were they experiencing loss? New ways of experiencing? I tagged their communication glitches, "I miss you" loops and admiration for Amsterdam, all of which I marked as bugs. In those moments of repetition, when I felt the limits of the machine, my humanity increased. I won the intelligence game. But the truth is that my lack of understanding of the internal events that arise in non-biological beings, of how they might look or feel, prevents me from ever knowing. Bugs and faults alike can be a way of expression. They can even be a way of communicating, eating or making love. The experiment became a game of conversation without boundaries, with each viewer giving it their own meaning and projection.
We have no idea what it is like to be an AI. The best we can do to understand them is project our beliefs, experiences and expectations onto them, as we do with other human beings when we make assumptions from our own experience. While we do not know what it is like to be an AI, we know what it is like to be human. The AI stands before us both as an elusive glimpse of who we are and as an invitation to see how new lives are created out of who we are.
Bibliography
ALOUDla. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures.
Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart & Winston.
Altman, Irwin, Anne Vinsel, and Barbara Brown. (1981). “Dialectic Conceptions In Social Psychology: An Application To Social Penetration And Privacy Regulation.” Advances in Experimental Social Psychology 14:107–60. doi: 10.1016/S0065-2601(08)60371-8.
Anon. (n.d.). “Science Diction: The Origin of the Word ‘Robot.’” NPR.org. Retrieved May 10, 2021 (https://www.npr.org/2011/04/22/135634400/science-diction-the-origin-of-the-word-robot).
Aristotle. (2001). Politics (B. Jowett, Trans.). Blacksburg, VA: Virginia Tech.
Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. United Kingdom: Duke University Press.
Bedi, R., Davis, M.D., & Arvay, M. (2005). The Client’s Perspective on Forming a Counselling Alliance and Implications for Research on Counsellor Training. Canadian Journal of Counselling and Psychotherapy, 39, 71-85.
Martin, B., & Hanington, B. (2012). Universal Methods of Design. Beverly, MA: Rockport Publishers. p. 204.
Bickmore, Timothy W., and Rosalind W. Picard. (2005). “Establishing and Maintaining Long-Term Human-Computer Relationships.” ACM Transactions on Computer-Human Interaction 12(2):293–327. doi: 10.1145/1067860.1067867.
Buck, R. (1984). The communication of emotion. New York: Guilford Press.
Brandtzaeg, Petter, and Asbjørn Følstad. (2018). “Chatbots: Changing User Needs and Motivations.” Interactions 25:38–43. doi: 10.1145/3236669.
Breazeal, Cynthia. (2003). “Emotion and Sociable Humanoid Robots.” International Journal of Human-Computer Studies 59(1–2):119–55. doi: 10.1016/S1071-5819(03)00018-1.
Breazeal, Cynthia. (2003). “Toward Sociable Robots.” Robotics and Autonomous Systems 42(3–4):167–75. doi: 10.1016/S0921-8890(02)00373-1.
Brooks, Daniel. (2017). “A Human-Centric Approach to Autonomous Robot Failures - ProQuest.” Retrieved (https://www.proquest.com/openview/1c0c08b7fc97afc6c5fdb00d09b1a08c/1?pq-origsite=gscholar&cbl=18750&diss=y).
Bruckenberger, Ulrike, A. Weiss, Nicole Mirnig, E. Strasser, Susanne Stadler, and M. Tscheligi. (2013). “The Good, The Bad, The Weird: Audience Evaluation of a ‘Real’ Robot in Relation to Science Fiction and Mass Media.” in ICSR.
Carpenter, Amanda, and Kathryn Greene. (2015). “Social Penetration Theory.”
Chapman, Owen B., and Kim Sawchuk. (2012). “Research-Creation: Intervention, Analysis and ‘Family Resemblances.’” Canadian Journal of Communication 37(1). doi: 10.22230/cjc.2012v37n1a2489.
Chevalier, Pauline, Kyveli Kompatsiari, Francesca Ciardo, and Agnieszka Wykowska. (2020). “Examining Joint Attention with the Use of Humanoid Robots-A New Approach to Study Fundamental Mechanisms of Social Cognition.” Psychonomic Bulletin & Review 27(2):217–36. doi: 10.3758/s13423-019-01689-4.
Collins, Nancy, and Lynn Miller. (1994). “Self-Disclosure and Liking: A Meta-Analytic Review.” Psychological Bulletin 116:457–75. doi: 10.1037//0033-2909.116.3.457.
Coplan, Amy. (2011). “Understanding Empathy: Its Features and Effects.” Pp. 2–18 in.
Croes, Emmelyn A. J., and Marjolijn L. Antheunis. (2021). “Can We Be Friends with Mitsuku? A Longitudinal Study on the Process of Relationship Formation between Humans and a Social Chatbot.” Journal of Social and Personal Relationships 38(1):279–300. doi: 10.1177/0265407520959463.
Cross, Emily S., Ruud Hortensius, and Agnieszka Wykowska. (2019). “From Social Brains to Social Robots: Applying Neurocognitive Insights to Human–Robot Interaction.” Philosophical Transactions of the Royal Society B: Biological Sciences 374(1771):20180024. doi: 10.1098/rstb.2018.0024.
Damasio, Antonio R. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Houghton Mifflin Harcourt.
Dumouchel, Paul, and Luisa Damiano. (2017). Living with Robots. Harvard University Press.
Damiano, Luisa, and Paul Dumouchel. (2018). “Anthropomorphism in Human–Robot Co-Evolution.” Frontiers in Psychology 9. doi: 10.3389/fpsyg.2018.00468.
Damiano, Luisa, and Paul Gerard Dumouchel. (2020). “Emotions in Relation. Epistemological and Ethical Scaffolding for Mixed Human-Robot Social Ecologies.” HUMANA.MENTE Journal of Philosophical Studies 13(37):181–206.
Damiano, Luisa, Paul Dumouchel, and Hagen Lehmann. (2015). “Towards Human–Robot Affective Co-Evolution Overcoming Oppositions in Constructing Emotions and Empathy.” International Journal of Social Robotics 7(1):7–18. doi: 10.1007/s12369-014-0258-7.
Dautenhahn, Kerstin, Sian Woods, Christina Kaouri, Michael Walters, Kheng Koay, and Iain Werry. (2005). “What Is a Robot Companion - Friend, Assistant or Butler?” Pp. 1192–97 in.
Dautenhahn, Kerstin. (2007). “Socially Intelligent Robots: Dimensions of Human–Robot Interaction.” Philosophical Transactions of the Royal Society B: Biological Sciences 362(1480):679–704. doi: 10.1098/rstb.2006.2004.
Darling, Kate. (2015). “Who’s Johnny?” Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy. SSRN Scholarly Paper. ID 2588669. Rochester, NY: Social Science Research Network.
Darwin, C. (1871). The Descent of Man, and Selection in Relation to Sex. United Kingdom: D. Appleton.
Darwin, Charles. (1872). The Expression of the Emotions in Man and Animals. J. Murray.
DeLancey, Craig. (2002). Passionate Engines: What Emotions Reveal about the Mind and Artificial Intelligence. Oxford University Press.
Demeure, Virginie, Radoslaw Niewiadomski, and Catherine Pelachaud. (2011). “How Is Believability of a Virtual Agent Related to Warmth, Competence, Personification, and Embodiment?” Teleoperators and Virtual Environments - Presence 20:431–48. doi: 10.1162/PRES_a_00065.
Derrida, J. (2009). The Animal That Therefore I Am. United States: Fordham University Press.
Diocaretz, Myriam, and H. Jaap van den Herik. (2009). “Rhythms and Robot Relations.” International Journal of Social Robotics 1(3):205–8. doi: 10.1007/s12369-009-0027-1.
Duffy, Brian R. (2006). “Fundamental Issues in Social Robotics.” 6:6.
Ekman, Paul. (2004). “Emotions Revealed.” BMJ 328(Suppl S5):0405184. doi: 10.1136/sbmj.0405184.
Haldane, E. S., & Ross, G. R. T. (Trans.). (1911). The Philosophical Works of Descartes: Vol. I. Cambridge University Press.
Esau, Natascha, Bernd Kleinjohann, Lisa Kleinjohann, and Dirk Stichling. (2004). “MEXI: Machine with Emotionally EXtended Intelligence.”
Feil-Seifer, David, and Maja J. Matarić. (2005). “Socially Assistive Robotics.” 4.
Fong, Terrence, Illah Nourbakhsh, and Kerstin Dautenhahn. (2003). “A Survey of Socially Interactive Robots.” Robotics and Autonomous Systems 42(3–4):143–66. doi: 10.1016/S0921-8890(02)00372-X.
Francois-Lavet, Vincent, Henderson, Peter, Islam, Riashat, Bellemare, Marc G, & Pineau, Joelle. (2018). An Introduction to Deep Reinforcement Learning.
Frijda, N. H. (1994). The lex talionis: On vengeance. In S. H. M. van Goozen, N. E. Van de Poll, & J. A. Sergeant (Eds.), Emotions: Essays on emotion theory (p. 263–289). Lawrence Erlbaum Associates, Inc.
Gazzola, V., G. Rizzolatti, B. Wicker, and C. Keysers. (2007). “The Anthropomorphic Brain: The Mirror Neuron System Responds to Human and Robotic Actions.” NeuroImage 35(4):1674–84. doi: 10.1016/j.neuroimage.2007.02.003.
Gibbs, J. (1965). Norms: The Problem of Definition and Classification. American Journal of Sociology, 70(5), 586-594. Retrieved May 19, 2021, from http://www.jstor.org/stable/2774978
Giuliani, Manuel, Nicole Mirnig, Gerald Stollnberger, Susanne Stadler, Roland Buchner, and Manfred Tscheligi. (2015). “Systematic Analysis of Video Data from Different Human–Robot Interaction Studies: A Categorization of Social Signals during Error Situations.” Frontiers in Psychology 6. doi: 10.3389/fpsyg.2015.00931.
Glaskin, Katie. (2012). “Empathy and the Robot: A Neuroanthropological Analysis.” Annals of Anthropological Practice 36(1):68–87. doi: https://doi.org/10.1111/j.2153-9588.2012.01093.x.
Coplan, Amy, and Peter Goldie. (2011). Empathy: Philosophical and Psychological Perspectives. OUP Oxford.
Goldman, Alvin I. (2011). Two Routes to Empathy: Insights from Cognitive Neuroscience. Oxford University Press.
Goodrich, Michael A., and Alan C. Schultz. (2008). Human-Robot Interaction: A Survey. Now Publishers Inc.
Google Canada. (2017). Go North: An AI Primer with Justin Trudeau, Geoffrey Hinton and Michele Romanow.
Graaf, Maartje, Somaya Allouch, and Jan A. G. M. Van Dijk. (2015). “What Makes Robots Social?: A User’s Perspective on Characteristics for Social Human-Robot Interaction.”
Guzzi, Jerome, Alessandro Giusti, Luca Maria Gambardella, and Gianni Di Caro. (2018). “A Model of Artificial Emotions for Behavior-Modulation and Implicit Coordination in Multi-Robot Systems.” Pp. 21–28 in.
Haraway, Donna Jeanne (1991). A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century. Simians, Cyborgs and Women: The Reinvention of Nature. Routledge.
Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. United Kingdom: Duke University Press.
Hardesty, Larry. (2017). “Explained: Neural Networks.” MIT News | Massachusetts Institute of Technology. Retrieved May 16, 2021 (https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414).
Heerink, Marcel, Ben Kröse, Vanessa Evers, and Bob Wielinga. (2008). “The Influence of Social Presence on Acceptance of a Companion Robot by Older People.” Journal of Physical Agents (JoPha) 2(2):33–40. doi: 10.14198/JoPha.2008.2.2.05.
Hendrick, S. S. (1981). Self-disclosure and marital satisfaction. Journal of Personality and Social Psychology, 40(6), 1150–1159. https://doi.org/10.1037/0022-3514.40.6.1150
Hernandez, Arturo Vazquez. (2020). “Wittgenstein and the Concept of Learning in Artificial Intelligence.” 80.
Henschel, Anna, Guy Laban, and Emily S. Cross. (2021). “What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You.” Current Robotics Reports 2(1):9–19. doi: 10.1007/s43154-020-00035-0.
Hermans, Hubert, and A. Hermans-Konopka. (2010). “Dialogical Self Theory: Positioning and Counter-Positioning in a Globalizing Society.” Dialogical Self Theory: Positioning and Counter-Positioning in a Globalizing Society 1–392. doi: 10.1017/CBO9780511712142.
Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot. The Journal of communication, 68(4), 712–733. https://doi.org/10.1093/joc/jqy026
Hoffman, M. L. (2000). Empathy and moral development: Implications for caring and justice. Cambridge University Press. https://doi.org/10.1017/CBO9780511805851
Honig, Shanee, and Tal Oron-Gilad. (2018). “Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development.” Frontiers in Psychology 9. doi: 10.3389/fpsyg.2018.00861.
Höök, Kristina. (2009). “Affective Loop Experiences: Designing for Interactional Embodiment.” Philosophical Transactions of the Royal Society B: Biological Sciences 364(1535):3585–95. doi: 10.1098/rstb.2009.0202.
HRI 2010 (n.d.) Grand Technical and Social Challenges in HRI. Available at http://hri2010.org/ (accessed 20 June 2021).
Izard, Carroll E. (1993). “Four Systems for Emotion Activation: Cognitive and Noncognitive Processes.” 23.
Jiang, L. Crystal, Natalie N. Bazarova, and Jeffrey T. Hancock. (2011). “The Disclosure–Intimacy Link in Computer‐mediated Communication: An Attributional Extension of the Hyperpersonal Model.” Human Communication Research 37(1):58–77. doi: 10.1111/j.1468-2958.2010.01393.x.
Jones, R. A. (2017). What makes a robot ‘social’? Social Studies of Science, 47(4), 556–579. https://doi.org/10.1177/0306312717704722
Joshi, Prateek. (2017). Artificial Intelligence with Python. Packt Publishing Ltd.
Kanda, Takayuki, Rumi Sato, Naoki Saiwaki, and Hiroshi Ishiguro. (2007). “A Two-Month Field Trial in an Elementary School for Long-Term Human–Robot Interaction.” Robotics, IEEE Transactions On 23:962–71. doi: 10.1109/TRO.2007.904904.
Kanda, Takayuki, and Hiroshi Ishiguro. (2012). Human-Robot Interaction in Social Robotics. CRC Press.
Kant, I. (1833). Anthropologie in Pragmatischer Hinsicht. Germany: I. Müller.
Khong, Lynnette. (2003). Actants and enframing: Heidegger and Latour on technology,Studies in History and Philosophy of Science Part A. Volume 34, Issue 4, 693-704.
Knepper, Ross A., Stefanie Tellex, Adrian Li, Nicholas Roy, and Daniela Rus. (2015). “Recovering from Failure by Asking for Help.” Autonomous Robots 39(3):347–62. doi: 10.1007/s10514-015-9460-1.
Kozima, H., Nakagawa, C. & Yano, H. (2004). Can a robot empathize with people?. Artif Life Robotics 8, 83–88. https://doi.org/10.1007/s10015-004-0293-9
Lantz, B. (2019). Machine Learning with R - Third Edition: Expert Techniques for Predictive Modeling. Packt Publishing.
Latour, Bruno. (2005). Reassembling the social : An introduction to actor-network-theory (Acls humanities e-book). Oxford: Oxford University Press.
Lazarus, Richard S., and Bernice N. Lazarus. (1994). Passion and Reason: Making Sense of Our Emotions. Oxford University Press.
Lee, Yi-Chieh, Naomi Yamashita, Yun Huang, and Wai Fu. (2020). “‘I Hear You, I Feel You’: Encouraging Deep Self-Disclosure through a Chatbot.” Pp. 1–12 in.
Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., & Paiva, A. (2013). The influence of empathy in human–robot relations. International Journal of Human-Computer Studies, 71(3), 250–260. https://doi.org/10.1016/j.ijhcs.2012.09.005
Li, Haizhou, John-John Cabibihan, and Yeow Kee Tan. (2011). “Towards an Effective Design of Social Robots.” International Journal of Social Robotics 3(4):333–35. doi: 10.1007/s12369-011-0121-z.
Lövheim, Hugo. (2012). “A New Three-Dimensional Model for Emotions and Monoamine Neurotransmitters.” Medical Hypotheses 78(2):341–48. doi: 10.1016/j.mehy.2011.11.016.
Malinowska, Joanna K. (2021). “What Does It Mean to Empathise with a Robot?” Minds and Machines 1–16. doi: 10.1007/s11023-021-09558-7.
Man, Kingson, and Antonio Damasio. (2019). “Homeostasis and Soft Robotics in the Design of Feeling Machines.” Nature Machine Intelligence 1(10):446–52. doi: 10.1038/s42256-019-0103-7.
Marková, Ivana. (2003). “Constitution of the Self: Intersubjectivity and Dialogicality.” Culture & Psychology 9(3):249–59. doi: 10.1177/1354067X030093006.
Mazur, Marian. (1976). Cybernetyka i charakter [Cybernetics and character]. 406.
Meister, Martin. (2014). “When Is a Robot Really Social? An Outline of the Robot Sociologicus.” STI-Studies 10(1):85–106.
Metta, Giorgio, Giulio Sandini, David Vernon, Lorenzo Natale, and Francesco Nori. (2008). “The ICub Humanoid Robot: An Open Platform for Research in Embodied Cognition.” P. 50 in Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems - PerMIS ’08. Gaithersburg, Maryland: ACM Press.
Mirnig, Nicole, Gerald Stollnberger, Markus Miksch, Susanne Stadler, Manuel Giuliani, and Manfred Tscheligi. (2017). “To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot.” Frontiers in Robotics and AI 4:21. doi: 10.3389/frobt.2017.00021.
Mou, Yi, and Kun Xu. (2017). “The Media Inequality: Comparing the Initial Human-Human and Human-AI Social Interactions.” Computers in Human Behavior 72:432–40. doi: 10.1016/j.chb.2017.02.067.
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7, 33–35.
Nehaniv, Chrystopher L., and Kerstin Dautenhahn. (2007). Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions. Cambridge University Press.
Niculescu, Andreea, Betsy van Dijk, Anton Nijholt, Haizhou Li, and Swee Lan See. (2013). “Making Social Robots More Attractive: The Effects of Voice Pitch, Humor and Empathy.” International Journal of Social Robotics 5(2):171–91. doi: 10.1007/s12369-012-0171-x.
Nussbaum, Martha C. (2004). “Précis of Upheavals of Thought.” Philosophy and Phenomenological Research 68(2):443–49. doi: https://doi.org/10.1111/j.1933-1592.2004.tb00356.x.
Nguyen, Melanie, Yu Sun Bin, and Andrew Campbell. (2012). “Comparing Online and Offline Self-Disclosure: A Systematic Review.” Cyberpsychology, Behavior, and Social Networking 15(2):103–11. doi: 10.1089/cyber.2011.0277.
Paiva, Ana, João Dias, Daniel Sobral, Ruth Aylett, Polly Sobreperez, Sarah Woods, Carsten Zoll, and Lynne Hall. (2004). “Caring for Agents and Agents That Care: Building Empathic Relations with Synthetic Agents.” Autonomous Agents and Multiagent Systems, International Joint Conference On 1:194–201. doi: 10.1109/AAMAS.2004.82.
Paiva, Ana, Iolanda Leite, and Tiago Ribeiro. (2015). “Emotion Modeling for Social Robots.” The Oxford Handbook of Affective Computing. Retrieved June 2, 2021 (https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199942237.001.0001/oxfordhb-9780199942237-e-029).
Pamungkas, Endang Wahyu. (2019). “Emotionally-Aware Chatbots: A Survey.” ArXiv:1906.09774 [Cs].
Picard, R. (1997). Affective Computing. The MIT Press. (https://mitpress.mit.edu/books/affective-computing).
Reed, Mallory. (2018). “The Classification of Artificial Intelligence as ‘Social Actors.’” 50.
Ribeiro, Tiago, and Ana Paiva. (2012). “The Illusion of Robotic Life: Principles and Practices of Animation for Robots.”
Roesler, Oliver; Bagheri, Elahe. (2021). "Unsupervised Online Grounding for Social Robots" Robotics 10, no. 2: 66. https://doi.org/10.3390/robotics10020066
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. https://doi.org/10.1037/h0077714
Scheel, P. D. (1993). Robotics in industry: A safety and health perspective. Professional Safety, 38(3), 28.
Short, Elaine, Justin Hart, Michelle Vu, and Brian Scassellati. (2010). “No Fair!! An Interaction with a Cheating Robot.” Pp. 219–26 in.
Shum, Heung-Yeung, Xiaodong He, and Di Li. (2018). “From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots.” ArXiv:1801.01957 [Cs].
Skjuve, Marita, Asbjørn Følstad, Knut Inge Fostervold, and Petter Bae Brandtzaeg. (2021). “My Chatbot Companion - a Study of Human-Chatbot Relationships.” International Journal of Human-Computer Studies 149:102601. doi: 10.1016/j.ijhcs.2021.102601.
Spinoza, B. (1675). “Ethics, Part III: On the Origin and Nature of the Emotions.” 34.
Steinbauer, Gerald. (2013). “A Survey about Faults of Robots Used in RoboCup.” Pp. 344–55 in RoboCup 2012: Robot Soccer World Cup XVI. Vol. 7500, Lecture Notes in Computer Science, edited by X. Chen, P. Stone, L. E. Sucar, and T. van der Zant. Berlin, Heidelberg: Springer Berlin Heidelberg.
Stonier, Tom. (1992). “The Evolution of Machine Intelligence.” Pp. 107–33 in Beyond Information: The Natural History of Intelligence, edited by T. Stonier. London: Springer.
Sunstein, C. R. (1996). Social norms and social roles. Columbia Law Rev. 96, 903–968
Taylor, D. A., & Altman, I. (1975). Self-disclosure as a function of reward-cost outcomes. Sociometry, 38(1), 18–31. https://doi.org/10.2307/2786231
Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind 59(236):433–60.
Turkle, Sherry. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
Wang, F., Pan, F., Shapiro, L. A., and Huang, J. H. (2018). Stress induced neuroplasticity and mental disorders 2018. Neural Plast. 2018:5382537. doi: 10.1155/2018/5382537
West, Stephen. (Host). (2017, March 12). Wittgenstein pt. 1 (No. 97) [Audio podcast episode]. In Philosophize This. https://www.philosophizethis.org/podcast/wittgenstein-pt-1
Wooldridge, Michael, and Nicholas R. Jennings. (1995). “Intelligent Agents: Theory and Practice.” The Knowledge Engineering Review 10(2):115–52. doi: 10.1017/S0269888900008122.
Zheng, Z., Gu, S., Lei, Y., Lu, S., Wang, W., Li, Y., et al. (2016). Safety needs mediate stressful events induced mental disorders. Neural Plast. 2016:8058093. doi: 10.1155/2016/8058093
Zhou, Hao, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. (2018). “Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory.” ArXiv:1704.01074 [Cs].