Part of Project Lirec: Notes (and bits pasted) from Deliverable 5.1
An understanding of social intelligence is needed for creating agents capable of long term companionship with humans.
What is social intelligence?
A long term artificial companion needs to apply the same strategies for creating and maintaining relationships if it's to be effective.
Emotions and personality need to be expressed for people to form an empathic relationship with an agent.
Companions need to have a consistent personality if they are to maintain long term relationships.
See also: Theory of Mind in Robotics
A theory of mind in a robot is a model of a user's (or another agent's) emotional state, built by inferring affective state from sensor input - such as facial expression or tone of voice. In this way an agent may attempt to reciprocate an empathic relationship with a human. Such a model must include a user's:
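A minimal sketch of the sensing side of such a model, assuming a valence/arousal representation of affect; the channel names, fusion weights, and function are illustrative, not from the deliverable:

```python
from dataclasses import dataclass

@dataclass
class AffectEstimate:
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  #  0 (calm)     .. 1 (excited)

def estimate_affect(face_valence: float, voice_valence: float,
                    voice_energy: float) -> AffectEstimate:
    # Weighted fusion of modalities; the weights here are assumptions.
    valence = 0.6 * face_valence + 0.4 * voice_valence
    arousal = min(1.0, max(0.0, voice_energy))
    return AffectEstimate(valence, arousal)

# e.g. a mildly positive face, neutral voice, low vocal energy:
estimate_affect(0.5, 0.0, 0.3)
```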
A companion needs to adapt and evolve based on past experiences. The companion must adapt to a user's personality in order to match it better as time passes.
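One simple way to model this gradual matching is an exponential moving average that nudges the agent's traits toward the user's observed values; the trait name and learning rate below are assumptions for illustration:

```python
def adapt(agent_traits: dict, observed_user_traits: dict,
          rate: float = 0.1) -> dict:
    """Nudge each agent trait a small step toward the user's observed value."""
    return {trait: (1 - rate) * agent_traits[trait]
                   + rate * observed_user_traits[trait]
            for trait in agent_traits}

agent = {"extraversion": 0.5}
user = {"extraversion": 1.0}
for _ in range(3):          # three interactions
    agent = adapt(agent, user)
# agent["extraversion"] has drifted toward the user's 1.0
```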
Human intelligence includes both:
Socially intelligent agents need to have the appearance of social understanding and also recognise other agents or humans in order to establish relationships with them.
This social information is part of a robot's environment, and sensed through its understanding of the outside world.
*The expectation of social interaction with an agent depends on its form: a human form is more natural, but also creates very high expectations - expectations which cannot currently be met.*
Establishing relationships might be easy - for instance, it's quite possible for people to have a form of relationship with inanimate objects - but maintaining and developing a relationship is harder.
Bickmore & Picard - relationship maintenance strategies, based on human-human relationship maintenance:
The theory states that people tend to avoid unstable cognitive configurations. For instance, if agent A knows and likes agent B, and both are aware of, and have positive feelings towards, object C, then there is balance; similarly if both have negative feelings towards object C. If they disagree, there is imbalance, which can be resolved from agent A's perspective by one of three steps:
The last option takes more work than the other two, and so there is also a concept of cost for maintaining social balance.
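Heider's balance rule and the cost idea can be sketched with signed attitudes (+1 like, -1 dislike); the resolution labels and cost values below are illustrative assumptions:

```python
def balanced(a_likes_b: int, a_likes_c: int, b_likes_c: int) -> bool:
    # A triad is balanced when the product of the three signs is positive.
    return a_likes_b * a_likes_c * b_likes_c > 0

# Imbalanced triad: A likes B, A likes C, but B dislikes C.
assert not balanced(+1, +1, -1)

# From A's perspective, resolution options with assumed costs:
resolutions = {
    "change own attitude to C": 1.0,  # cheap: revise own feeling
    "change own attitude to B": 1.0,  # cheap: revise the relationship
    "persuade B about C": 3.0,        # costly: influence the other agent
}
cheapest = min(resolutions, key=resolutions.get)
```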
Embodied conversational agents - these use face to face conversations in an attempt to simplify human/computer communication.
REA: The automated real estate agent: http://www.media.mit.edu/gnl/projects/humanoid/ Attempts to sell people houses, keeps track of task talk and small talk, and can interleave them.
Laura & FitTrack: An exercise advisor designed to explore long term relationships. Tested with 100 people; users given the relationship-building features were “more likely” to continue using the system. http://www.ccs.neu.edu/home/bickmore/publications/CACM04.pdf
Avatar Arena: Multiple characters interacting with each other on the user's behalf: http://www.ltg.ed.ac.uk/magicster/deliverables/rist.pdf
2SGD Model: Agents for interacting with groups: http://gaips.inesc-id.pt/gaips/shared/docs/prada-iva2005.pdf
Sociable physical robots
Kismet: Animatronic head which responds to people's actions, and is capable of “proto-conversation” through babbling: http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/
Valerie the roboceptionist: http://www.msnbc.msn.com/id/4306856
Theoretical foundations start with Darwin's study of the expression of emotions in humans and animals. The James-Lange theory of emotion maps emotions to physiological responses.
Appraisal theories state that emotions are the result of evaluating an event or situation.
(Moffat, 1997) states, “personality is the name we give to (an agent’s) reaction tendencies that are consistent over situations and time”.
FAtiMA: Used in FearNot! (is this part of ION?)
Personality is a combination of:
Needs are given weights; there is a reactive level and a more goal-oriented deliberative level.
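A minimal sketch of weighted needs driving reactive action selection, loosely in the spirit of MicroPsi; the need names, weights, and urgency rule are assumptions, not MicroPsi's actual formulation:

```python
needs = {
    # "level" is how satisfied the need currently is, 0..1
    "energy":      {"weight": 1.0, "level": 0.2},
    "affiliation": {"weight": 0.5, "level": 0.9},
}

def most_urgent(needs: dict) -> str:
    # Reactive level: urgency grows with the weighted deficit. A deliberative
    # level would instead plan a goal to satisfy the chosen need.
    return max(needs, key=lambda n: needs[n]["weight"] * (1 - needs[n]["level"]))

most_urgent(needs)  # energy dominates: 1.0 * 0.8 vs 0.5 * 0.1
```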
More info: http://www.micropsi.org/