Social robot


A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following the social behaviors and rules attached to its role. Like other robots, a social robot is physically embodied. Some synthetic social agents are designed with a screen to represent the head or 'face' to dynamically communicate with users. In these cases, the status as a social robot depends on the form of the 'body' of the social agent: if the body has and uses physical motors and sensing abilities, then the system can be considered a robot.

Background

While robots have often been described as possessing social qualities, social robotics is a fairly recent branch of robotics. Since the 1990s, artificial intelligence and robotics researchers have developed robots that explicitly engage with people on a social level.
The evolution of social robots began with autonomous robots designed to have little or no interaction with humans. Essentially, they were designed to take on tasks that humans could not. Technologically advanced robots were sent to handle hazardous conditions and assignments that could put humans in danger, such as exploring the deep oceans or the surface of Mars. Building on these original intentions, robots are continually being developed for human-related settings in order to establish their social aspect and assess their influence on human interactions. Over time, social robots have advanced to the point of having their own role in society.
Designing an autonomous social robot is particularly challenging, as the robot needs to correctly interpret people's actions and respond appropriately, which is currently not yet possible. Moreover, people interacting with a social robot may hold very high expectations of its capabilities, based on science fiction representations of advanced social robots. As such, many social robots are partially or fully remote controlled to simulate advanced capabilities. This method of controlling a social robot is referred to as a Mechanical Turk or Wizard of Oz, after the character in the L. Frank Baum book. Wizard of Oz studies are useful in social robotics research to evaluate how people respond to social robots.
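A Wizard-of-Oz setup can be sketched as a simple teleoperation loop: a hidden operator selects canned behaviours that the robot then performs, so the participant perceives more autonomy than the robot actually has. The sketch below is illustrative only; the `Robot` class and its `say`/`gesture` methods are hypothetical stand-ins for whatever API a real platform exposes.

```python
# Minimal Wizard-of-Oz control loop (sketch). A hidden operator picks
# canned behaviours; the participant only sees the robot acting, so the
# robot appears more capable than it is.

class Robot:
    """Hypothetical robot interface; a real platform would drive hardware."""
    def __init__(self):
        self.log = []  # record of actions, standing in for actuation

    def say(self, text):
        self.log.append(("say", text))

    def gesture(self, name):
        self.log.append(("gesture", name))

# Behaviour palette the operator chooses from (names are illustrative).
BEHAVIOURS = {
    "greet": lambda r: (r.gesture("wave"), r.say("Hello! Nice to meet you.")),
    "nod":   lambda r: r.gesture("nod"),
    "thank": lambda r: r.say("Thank you!"),
}

def wizard_step(robot, operator_choice):
    """Execute one operator-selected behaviour, ignoring unknown keys."""
    action = BEHAVIOURS.get(operator_choice)
    if action is not None:
        action(robot)

robot = Robot()
# In a real study the choices would come from a live operator console;
# here they are scripted for illustration.
for choice in ["greet", "nod", "thank"]:
    wizard_step(robot, choice)
```

The key design point is that the participant-facing behaviour is decoupled from any autonomous perception: the "intelligence" resides entirely in the hidden operator's choices.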

Definition

A robot is defined by the International Organization for Standardization as a reprogrammable, multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks. As a subset of robots, social robots perform any or all of these processes in the context of a social interaction: a social robot interacts socially with humans or evokes social responses from them. The nature of the social interaction is immaterial and may range from relatively simple supportive tasks, such as passing tools to a worker, to complex expressive communication and collaboration, such as assistive healthcare. Hence, social robots are expected to work together with humans in collaborative workspaces. Moreover, social robots are beginning to follow humans into much more personal settings such as the home, health care, and education.
Social interactions are likely to be cooperative, but the definition is not limited to this situation; uncooperative behavior can also be considered social in certain situations. The robot could, for example, exhibit competitive behavior within the framework of a game. The robot could also interact with minimal or no communication. It could, for example, hand tools to an astronaut working on a space station. However, it is likely that some communication will be necessary at some point.
Two suggested ultimate requirements for social robots are the Turing test, to determine the robot's communication skills, and Isaac Asimov's Three Laws of Robotics, for its behavior. The usefulness of applying these requirements in a real-world application, especially in the case of Asimov's laws, is still disputed and may not be possible at all. However, a consequence of this viewpoint is that a robot that only interacts and communicates with other robots would not be considered a social robot: being social is bound to humans and their society, which defines the necessary social values, norms and standards. This results in a cultural dependency of social robots, since social values, norms and standards differ between cultures.
This brings us directly to the last part of the definition. A social robot must interact within the social rules attached to its role, and the role and its rules are defined by society. For example, a robotic butler for humans would have to comply with established rules of good service: it should be anticipatory, reliable and, most of all, discreet. A social robot must be aware of these rules and comply with them. However, social robots that interact with other autonomous robots would also behave and interact according to non-human conventions. In most social robots, the complexity of human-to-human interaction will be gradually approached with the advancement of android technology and the implementation of a variety of more human-like communication skills.

Social interaction

Researchers have investigated user engagement with robot companions, and the literature presents different models of engagement. One example is a framework that models both the causes and effects of engagement, using features related to the user's non-verbal behaviour, the task, and the companion's affective reactions to predict children's level of engagement.
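An engagement model of this kind can be sketched as a simple probabilistic classifier over hand-crafted features. In the sketch below, the feature names (gaze toward the robot, smiling, task progress) and the logistic-regression weights are purely illustrative assumptions, not values from any published framework; a real system would learn the weights from annotated interaction data.

```python
import math

# Sketch of an engagement predictor: combine non-verbal and task features
# into an engagement probability via a logistic model. Feature names and
# weights are illustrative assumptions, not values from any real study.

WEIGHTS = {"gaze_at_robot": 2.0, "smiling": 1.5, "task_progress": 1.0}
BIAS = -2.0  # hypothetical intercept

def engagement_probability(features):
    """Map feature values in [0, 1] to an engagement probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

# A child looking at the robot and smiling scores higher than one who is not.
engaged = engagement_probability(
    {"gaze_at_robot": 1.0, "smiling": 1.0, "task_progress": 0.5})
disengaged = engagement_probability(
    {"gaze_at_robot": 0.0, "smiling": 0.0, "task_progress": 0.1})
```

A robot companion could use such a score to adapt its behaviour, for example switching to a more expressive affective reaction when the predicted engagement drops below a threshold.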
Many people are uneasy about interacting socially with a robot and, in general, people tend to prefer smaller robots to large humanoid robots. They also prefer robots to do tasks like cleaning the house rather than providing companionship. In verbal social interactions people tend to share less information with robots than with humans. Despite initial reluctance to interact with social robots, exposure to a social robot may decrease uncertainty and increase willingness to interact with the robot, and research shows that over time people speak for a longer time and share more information in their disclosures to a social robot. If people have an interaction with a social robot that is seen as playful they may be more likely to engage with the robot in the future.

Symbolic behaviour and social perception

AI agents and robotic systems perform actions, such as gestures, facial expressions, or formalized outputs, that observers interpret as symbolically meaningful. These behaviours, while typically generated through programmed instructions or statistical models, can resemble human social conventions and are often perceived as conveying intent or emotion. Studies in human-robot interaction suggest that anthropomorphic design features and symbolic cues can influence users' interpretations, potentially increasing engagement, perceived trust, and emotional responsiveness during interaction. Research in social cognition indicates that people attribute meaning or emotional states to artificial agents, particularly in contexts where nonverbal cues, such as eye contact or nodding, are present.

Contextual framing and situated social experience

Recent research in human-robot interaction has proposed nonessentialist perspectives on how robots come to be perceived as social actors. According to Kaptelinin and Dalli, the "sociality" of robots does not arise from inherent design features or fixed human tendencies, but from how people experience robots as part of meaningful collaborative contexts as a whole. They term this process contextual framing. This approach emphasizes that the perception of robots as social partners is situated and depends on factors such as shared goals, collaboration, space and environment, and personal significance rather than on designed appearance and behaviours alone. While Kaptelinin and Dalli reject essentialist views that attribute perceptions of sociality solely to robot design, they acknowledge that design choices can influence how interaction contexts develop and are part of a greater whole that shapes the social meaning of an encounter. Alongside designed aesthetics, designed behaviours, and programmed or pre-planned actions, dimensions such as contextual importance, contextual agency, collaboration, and the personal impact of the robot must be considered to gain a fuller understanding of the contextual framing of a human's social experience with a robot. Whether or not a robot's design features lead it to be perceived as social is thus mediated by the context in which the interaction arises.

Societal impacts

The increasingly widespread use of more advanced social robots is one of several phenomena expected to contribute to the technological posthumanization of human societies, through which process "a society comes to include members other than 'natural' biological human beings who, in one way or another, contribute to the structures, dynamics, or meaning of the society."

Uses in healthcare

Social robots have been used increasingly in healthcare settings, and recent research has explored their applicability as mental health interventions for children. A scoping review analyzed the impacts that robots such as Nao, Paro, Huggable, Tega and Pleo have on children in various intervention settings. Results from this work highlighted that depression and anger may be reduced in children working with social robots, whereas findings for anxiety and pain were mixed. Distress was found to be reduced in children who interacted with robots. Finally, this scoping review found that affect was positively impacted by interaction with robots: children smiled for longer and demonstrated growth mindsets when playing games. Robots have the added benefit that they can be used instead of animal-assisted therapy for children who are allergic or immunocompromised. Sanitation is a necessary consideration, but with washable covers or sanitizable surfaces this becomes less of a problem in medical settings. Another review analyzed data from previous studies and found further support that social robots may reduce negative symptoms children experience in healthcare settings. Social robots can be used as tools for distracting children from procedures, such as receiving an injection, and have demonstrated the ability to reduce the experience of stress and pain. Children who interacted with both a psychotherapist and a robot assistant for therapy experienced reduced anger, anxiety, and depression when coping with cancer compared to a control group. There is some evidence that free play with a robot while hospitalized can help children experience more positive moods. More work needs to be done to analyze the impact of social robots on children in psychiatric wards, as evidence revealed that some children may dislike the robot and feel it is dangerous.
Overall, further research should be conducted to fully understand the impact of social robots on reducing negative mental health symptoms in children, but there appears to be advantages of utilizing social robots in healthcare settings.
Social robots have been shown to have beneficial outcomes for children with autism spectrum disorder (ASD). As many individuals with ASD tend to prefer predictable interactions, robots may be a viable option for social interaction. Previous research on interactions between children with ASD and robots has demonstrated positive benefits, for instance shared attention, increased eye contact, and interpersonal synchrony. Various types of robots have the potential to yield these benefits for children with ASD, from humanoid robots like KASPAR, to cartoonish robots such as Tito, to animal-like robots like Probo, to machine-like robots such as Nao. One problem that may hinder the advantages of social robots as social interaction tools for children with ASD is the uncanny valley, as the eerie human-likeness of a robot may be overstimulating and anxiety-inducing. Social robots thus appear to provide an opportunity to increase social skills in children with ASD, and future research should investigate this topic further.
Individuals with cognitive impairments, such as dementia and Alzheimer's disease, may also benefit from social robots. In their study, Moro et al. used three types of social robots (a human-like robot, Casper; a character-like robot, Ed; and a tablet) to help six individuals with mild cognitive impairment make a cup of tea. Results demonstrated that, to an extent, the humanoid robot was most engaging to individuals with cognitive impairments, likely due to the expressiveness of its face compared with the minimal expression of Ed and the tablet. Participants also anthropomorphized the human-like and character-like robots more than the tablet by addressing them and asking them questions, further indicating a preference for the social robots. Additionally, participants perceived the human-like robot to be useful both in social situations and in completing activities of daily living, whereas the character-like robot and the tablet were seen as useful only for activities of daily living. Another study, by Moyle et al., investigated the impact that providing an individual with dementia a robot toy, Paro, versus a plush toy would have on caregivers' and family members' perception of the individual's well-being. This study highlighted the ways in which some long-term care facilities may provide minimal stimulation for dementia patients, which can lead to boredom and increased agitation. After completing the trial, caregivers and family members were asked to assess the well-being of the individual with dementia; overall, the group that interacted with Paro was perceived to be happier, more engaged, and less agitated. One of the main issues with using Paro, despite its perceived benefits, is its cost; future research must investigate more cost-effective options for older-adult care. Another issue in conducting research between individuals with cognitive impairments and social robots is their ability to consent.
In some cases, informed consent by proxy can be used, but the benefits and risks should be weighed before conducting any research. Long-term research has shown that residents of care homes are willing to interact with humanoid robots and benefit from cognitive and physical activation led by the robot Pepper. Pepper was also used to assess the feelings of safety and security the robot provided for older individuals. For these individuals, security is associated with trust and confidence developed through interpersonal relationships. In assessments using videos and questionnaires, participants rated both safety and security positively. Another long-term study in a care home showed that people working in the care sector are willing to use robots in their daily work with residents. It also revealed that, even though the robots are ready to be used, they still need human assistants: they cannot replace the human workforce, but they can assist it and open up new possibilities.
Social robots have been used as mental well-being coaches, for students, in public, and at the workplace. Robotic mental well-being coaches can perform practices such as positive psychology and mindfulness. Users' perceptions of robotic mental well-being coaches have been shown to depend on the robot's appearance.
The ethics of social robots' use in healthcare should also be considered. One potential risk of social robots is deception: there may be an expectation that the robot can perform certain functions when it actually cannot. For example, with increased human-likeness and anthropomorphic traits, humans interacting with robots might assume the robot has feelings and thoughts, which is misleading. Isolating older adults from humans is also a risk of social robots, in that these robots may make up a significant amount of an individual's social interaction. Currently there is little evidence about the long-term impacts this limited human contact and increased robot interaction may have. Some social robots also have a built-in telepresence capacity that individuals can use to videoconference with family, caregivers, and medical staff, which may decrease loneliness and isolation. The video capability of some robots is a potential avenue for social interaction and for increasing the accessibility of medical assessments. The dignity of persons interacting with robots should also be respected: individuals might find some robots, like the cuddly toy-like Paro, to be infantilizing, and future investigations should explore how best to increase the autonomy of patients interacting with robots. Furthermore, privacy is another ethical concern, in that some social robots can collect and store video data or data from sensors. Stored data is at risk of being stolen or hacked, which negatively impacts individual privacy. The safety of individuals interacting with robots is another concern, in that robots may accidentally cause harm, for example by bumping into someone and causing them to fall. Ethical considerations should be taken into account before introducing robots into healthcare settings.