Ana Paiva: A physical entity that moves around in the world, that acts upon the world, and that should be autonomous.
Krzysztof Tchon: I feel that the robot is a machine, a sort of machine that can replace a human in physical and also intellectual activity.
Mattias Jacobsson: It's a physical embodiment of sorts, but it doesn't necessarily conform to this sort of normal vision that people might have of robots.
Peter McOwan: What I understand by a robot is a form of intelligent tool, a way of being able to use mechanical devices to support human beings in being able to do the sorts of things human beings want to do.
Secundino Correia: I see a robot as being like a computer, with a different aspect and a different embodiment, but it's a computer.
Ruth Aylett: An attempt to put intelligent systems into some kind of a body. We think embodiment in this area of research is really important.
Dave Griffiths: Some kind of technological artifact which we believe has something of its own life or some idea of agency.
Carsten Zoll: In Lirec we are dealing with social robots. We, as psychologists, define social robots as robots that are able to address the social needs of the user.
Adam Miklosi: Robots are, more or less, things that can help people. I have more of an idea about social robots, robots that are working, or at least living, in the spaces that are occupied by people.
Kerstin Dautenhahn: There are generally three different aspects to a robot. There is the ability to perceive, to perceive the environment. There is the ability to act upon the environment. There is the ability to reason or to make decisions, to have this link between the perception and the action.
Peter McOwan: Lirec is a very large multi-disciplinary project looking at developing new forms of technology for artificial companions. Those are robotic companions or virtual character companions. The key elements to the project that really make it different are the fact that we are using real long-term studies in genuine environments in an office, in a house. Also, we are looking at this rather intriguing idea of migration that the entity, the intelligence within a robot can move from one physical body to another, or possibly to a handheld device. There are lots of very interesting technical questions, but also some really fascinating psychological and kind of broader societal questions that are in the project and those are all fused together in Lirec, which is an exciting way to explore the next step in human robot relationships.
Adam Miklosi: The term ethology basically means studying behavior from a biological point of view. Ethologists believe that explanations of behavior can originate from biological causes, whether those are mental or physical causes, and this is what we use as an explanation. The other aspect of ethology is that you draw your conclusions from the natural behavior of the animal. The whole idea for our research was not really coming from robotics; it came purely from an interest in the way dogs have been changed during evolution, making them such good [inaudible 3:55] as to live with humans. Actually, the dog is a very good model for robotics. Robots should be anything but a human. I learned that during Lirec, actually. Dogs are already so successful, and they are not human-like at all, if you take it really seriously. Why is that not a good model for [inaudible 4:14], especially if our technology at the moment is very far from being as sophisticated as a human already is? How should another agent, whether that be a dog or a robot, behave or perform in order to get anthropomorphizing from humans, or in other words, so that humans allow these creatures to live together with them, which, I think, is a big honor, if you like, or a big possibility for a creature? What are the traits that are useful to the project? One that we think is important is interest, social interest. That is what dogs clearly show: their interest in what is going on. A dog puppy is looking in the eyes, is looking at the head, is following the human around all the time, and so on. This, to some extent, should also be displayed by a robot. If you see that in a robot, you immediately feel, maybe, some sort of social contact. I think that helps humans to accept this robot in their environment.
Kerstin Dautenhahn: We have a lot of recent activities in the project thinking about the computational architectures, about how to organize that memory. What should the robot remember? It will clearly not remember every single thing that happens in every split second. You need some AI procedures and algorithms that make sure the robot remembers things that are meaningful and that are also supportive of the relationship. [Inaudible 5:53] Portugal is also involved in this research.
Ana Paiva: Fatima is a software agent architecture that has emotions embedded. It's a symbolic architecture that allows us to generate behaviors for our companions. The agent mind is what we call the software entity that generates the intelligent behavior of our companions, our agents. When the agent says, "Hello. Good morning," when the agent plays a move, when the agent decides to go to this door or that door, or when the agent gets sad, that is all controlled by this agent mind, which is the architecture. It decides what the companion is going to do.
Ruth Aylett: We have mechanisms within our overall architecture which do allow us to take something quite abstract coming from the bit of our architecture that our colleagues in Portugal had developed which we often call the agent mind, which is doing planning, which is running the affective model, which is doing all the sort of artificial intelligence stuff. Mapping that down through a number of different layers unto the actual physical capabilities of the platform embodiment we happen to be located in right now.
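The layered mapping described here, an abstract agent mind whose decisions are grounded differently by each embodiment, can be sketched roughly as follows. This is a minimal illustration, not Lirec's actual FAtiMA code; the event names, emotions, and platform commands are all invented for the example.

```python
# Agent mind: a toy appraisal step that picks an abstract action from an event.
APPRAISAL = {
    "player_won": ("joy", "congratulate"),
    "player_lost": ("sadness", "console"),
}

# Embodiment layers: each body realizes the same abstract action differently.
EMBODIMENTS = {
    "robot_head": {"congratulate": "nod and smile with servos",
                   "console": "droop head"},
    "screen_agent": {"congratulate": "play cheering animation",
                     "console": "show sad-face sprite"},
}

def react(event, body):
    """Appraise the event in the agent mind, then map the abstract
    action down onto the capabilities of the current embodiment."""
    emotion, action = APPRAISAL[event]
    command = EMBODIMENTS[body][action]
    return emotion, command
```

The point of the layering is that the same agent mind drives any body: only the bottom mapping changes per platform.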
Peter McOwan: What is the emotion tracker system we've developed? It uses software to recognize a person's face, even in a fairly cluttered scene. It's been trained by showing it lots and lots of examples of different faces pulling different kinds of expressions. It's able to recognize the expression that you are making with your face, whether you're smiling or whether you're surprised. That information can then be fed into the perceptual element of a robot or a graphical character, so that the character has an idea of the facial expressions you are making in interaction with it. If you think about the way that human beings work, one of the social metaphors that we use constantly is that our internal feelings are expressed through the external expressions on our faces. Effectively, to be able to understand this kind of social messaging system, which human beings have developed to a very fine art, robots have got to be able to recognize expressions on faces as well as to make the appropriate expressions back. That is one of the key channels by which humans interact with one another, and we want to be able to carry that across into human-robot interaction.
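The recognition step described here, learning from many labeled example faces and then classifying a new expression, can be sketched with a toy nearest-centroid classifier. This is purely illustrative: the real system works on video images, whereas here the "faces" are just hypothetical precomputed feature vectors.

```python
from math import dist

def train_centroids(samples):
    """Average the feature vectors seen for each expression label,
    mimicking 'training on lots of example faces'."""
    sums, counts = {}, {}
    for label, features in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the expression whose trained centroid is nearest."""
    return min(centroids, key=lambda label: dist(centroids[label], features))
```

A robot's perception module would feed the winning label (smiling, surprised, and so on) into its social reasoning, as described above.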
Ana Paiva: At INESC, we tried to explore companions, robotic companions and even virtual companions, in the context of games. Since we are interested in games, it should be a companion that plays a game with you and helps you along while playing, so we have a chess companion, which is the iCat playing chess with you and helping you through the process of learning. The iCat scenario tries to perceive the emotions of the kids, and the affective recognition that was done at Queen Mary was incorporated in the iCat scenario. All the tests we've done were in collaboration with Queen Mary.
Krzysztof Tchon: In Lirec, our group is responsible for designing the robot. We have designed a robotic companion which is based on a two-wheeled mobile platform, with a torso, a pair of hands, and a head. This head is very interesting: it can express emotions, so the head is sometimes treated as a specific, separate part of the robot. The head lives its own life, and it is called EMYS, the Emotive Head System. At the same time, it is the name of a European pond turtle. The head resembles the head of the turtle, and simultaneously it is capable of expressing some emotions. We have a robot called FLASH, and this robot is equipped with a very interesting emotive head. That's our achievement.
Krzysztof Arent: The contribution is from INESC and also from Queen Mary.
Krzysztof Tchon: Yes, because they have managed to make this head express emotions dynamically. Our approach is more static, but they, through careful programming, could make the head express emotions dynamically. That's much more convincing for people. This was a very important contribution. The other one, I would say, is the contribution to face recognition and things like that. This is also very important. From these two partners we could obtain some results on how to process vision images, in other words, how the robots are able to see the environment. That's very important.
Ana Paiva: The EMYS head plays with you in a social setting; it's your companion, and you see a group of friends playing a board game. That is the idea of a companion situation, where this robotic head is a character that plays a social game with you, like a board game. All these are different experiences of a robotic companion in different kinds of games and entertainment scenarios. We were involved in creating the emotive expressions of EMYS. It's interesting, because the first expressions that we designed were very simple. The platform is very rich, but it is still limited; it's not like our face, as the degrees of freedom are much, much smaller. So we really needed to do animation, the process that animators do. Let's get inspiration from Disney, let's get inspiration from cartoons, because the EMYS head is very cartoonish, so let's get inspiration from The Muppet Show. Let's really design the animations in a cartoon-like way, following principles of animation. We were moving away from robotics and saying, "No, no, no, maybe because of the type of head we should look at the principles of animation." That's what we did: we went to look at principles of animation, we looked into these cartoons and so on, and we redesigned. The new expressions are a lot more alive.
Kheng Lee Koay: In the context of Lirec, migration basically means that the intelligence of the companion can migrate, or move itself, from one embodiment to another embodiment.
Ruth Aylett: Our overall program is called Spirit of the Building. The idea here was that we could produce companions which are an integral part of a particular environment. A companion ought to be something that is able to share your experience, we feel. The robot can share your experience while you're in the office. The graphical panel could share your experience in a particular area where you had a graphical panel. What about other parts of the building? We thought, well, handheld devices can go with you everywhere. How about the essence of your companion, the personality and some of the memory, being able to leave the robot embodiment, which has advantages but also disadvantages, and enter a different embodiment? We have these three different embodiments: we have the team buddy, we have the graphical character, and we have a mobile companion as well. Then we have migration abilities between them. We have the idea of a scenario, for instance, where you would enter the building, and by the door is a big panel. In fact, we have installed these now. The Spirit of the Building could greet you there. If you didn't know where you were going, which, let me say, is particularly common in our university, which has dreadful directional signage, it could say, "Well, do you want me to take you there?" It could leap into your phone, take you up to the office, and then you could meet the robot; it would then jump back into the robot and interact with you as its robot self. That's the scenario which we've been exploring, essentially, and a very interesting one. Migration really is quite an essential part of what we've been doing.
Peter McOwan: It allows us to be able to move between virtual platforms and physical platforms, so there are certain circumstances where to undertake a particular task, you need a physical body to be able to influence things, move things around within a room, to carry things and so on. There are other circumstances where potentially you want to continue that companionship relationship but it's not necessary to have a physical body. In that case, what can happen is the entity, the intelligence can move from the physical robot onto, for example, a handheld device which means you can continue and take your companion with you where it can continue to learn about you and build up a set of memories which then can come back and be reloaded into the physical body. You have this kind of continuation of interaction across various different kinds of bodies, basically using the right kind of body to achieve the right kind of results.
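The migration idea, serializing the companion's personality and memories out of one body and loading them into another, might be sketched like this. All names here (`Embodiment`, `export_state`, `import_state`) are hypothetical illustrations; Lirec's actual migration protocol is not described in this interview.

```python
import json

class Embodiment:
    """A body (robot, phone, screen character) that can host the companion."""
    def __init__(self, name):
        self.name = name
        self.state = None  # no companion resident yet

    def export_state(self):
        """Package the companion's personality and memories for transfer;
        the companion then leaves this body."""
        packed = json.dumps(self.state)
        self.state = None
        return packed

    def import_state(self, packed):
        """Receive the companion; it resumes with the same memories."""
        self.state = json.loads(packed)

robot = Embodiment("robot")
phone = Embodiment("phone")
robot.state = {"personality": "helpful",
               "memories": ["met Ana at the door"]}

# The companion jumps from the robot to the handheld device.
phone.import_state(robot.export_state())
```

The memories built up on the handheld could later be exported back into the physical robot in the same way, giving the continuity of interaction described above.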
Secundino Correia: We previously had a piece of software for kids learning music, called Little Mozart. It's on the market. It's traditional software. Now we are trying to put into the software all the knowledge that we have been taking from the project, all the emotional aspects. Mozart can now act like a companion that helps children understand why some intervals in the music are good, why others are not, and why. It's an agent in the sense that it has a kind of personality. It knows things, it knows music, and it knows some rules on how to teach others to learn to compose melodies. It can remember past interactions. It can guide you. It lives in a computer, or in an iPad, or iPod, or iPhone, it doesn't matter. It can migrate between different platforms, and it's the same companion when it migrates to a different platform.
Kerstin Dautenhahn: Our goal is to focus on the robot house showcase, so we study robots as potential companions for people living in a house. Here it's very, very important that, A, the robot has to be useful and, B, it should do its job in a social way. When we do experiments, we make sure that it looks like a house where maybe one of their friends or relatives could potentially live. We find that it relaxes people more. It helps them to envisage that they could actually have, not this robot, but some robot like it, in the future, in their house. People use the term "a more ecologically valid context", meaning people buy into it better than having these experiments run in the laboratory. It's more natural than that.
Kheng Lee Koay: The main robots that we work on are the Sunflower robots. They were designed by me during the second phase of the project, because we needed a system that had more expressive features. We decided to focus on sound, light, and physical body movement.
Kerstin Dautenhahn: We used the robot as a basic mobile platform. It can navigate in the house. It can remind people; it knows about people's schedules. It can alert them to certain events in the house. This house has also been designed as a smart house: there are more than 40 sensors distributed in it. The robot is able to perceive when someone rings the doorbell, when the kettle goes on or off, or when someone opens the fridge door. Then we have a robot that we use as a physical assistant. It's a Care-O-bot 3, designed by [inaudible 20:02] in Germany. It's made of industrial-type components and has a large manipulator, so it is able to pick up objects. For example, it is able to pick up your woolen hat or your coat and hand it to you, in scenarios where maybe people are preparing to go outside and the robot helps them fetch some clothing. The third robot that we use is the Sony AIBO, for remote collaborative games between two people who are remotely situated. We are using that robot as part of a game that two people can play, whereby the robot's role is to be a participant in the game and also to provide a tactile dimension.
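The smart-house setup described here, house sensors whose events the robot perceives, can be sketched as a simple publish-subscribe dispatcher. The sensor names and the robot's responses are invented for illustration; the robot house's real sensor middleware is not described in the interview.

```python
class SensorHub:
    """Routes smart-house sensor events to whoever subscribes to them."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        """Register a callback for a named sensor event."""
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event):
        """A sensor triggered: notify every subscriber of that event."""
        for handler in self.handlers.get(event, []):
            handler(event)

alerts = []          # stands in for the robot's alert behavior
hub = SensorHub()
hub.subscribe("doorbell",
              lambda e: alerts.append("robot: someone is at the door"))
hub.subscribe("kettle_off",
              lambda e: alerts.append("robot: the kettle has boiled"))

hub.fire("doorbell")  # e.g. a visitor rings; the robot alerts the user
```

In a real deployment the handlers would drive the robot's navigation and speech rather than append to a list.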
Kheng Lee Koay: The robot house helped Lirec in the sense that it provided a realistic, naturalistic environment in which we could test the system with users. It also provided a stable platform.
Mattias Jacobsson: Our group has mainly focused on two things. One is that we conduct long-term studies on how people actually live with robots out in the wild, in the real world, to see what kind of engagement they have with robots and the playful aspects. How do they actually spend time with robots? What do they do with them? We try to learn from that and develop design cases and scenarios, interaction design prototypes and methods, to capture what people actually would like to do with robots. More from a design point of view: how are we going to handle this new material that robots really are? How can we make something useful in the future with this? We try to focus on how people spend time with robots and also on what they do together with these sorts of agents.
Ruth Aylett: The scenario we've been looking at with our robots is a team buddy. The robot is located in a research lab with a crowd of researchers. Can we create something that is companionable, that acts as a member of the research team, that doesn't annoy you, that doesn't fall over the furniture, that doesn't ask you to plug it in every 30 seconds? Those are some of the issues we're looking at. Put a robot into a lab with a crowd of people and it has many people with whom it can interact. It can acquire knowledge about those people. Individuals might tell it things, for instance. It could also access information about individuals from the internet, for instance, from their Facebook page. What if it tells a member of the team something about another member of the team that that person really didn't want to be known publicly? That could easily happen. What if you get information from a verbal interaction which essentially is confidential and which you don't want the whole world to know? Remember, a robot has a memory. This is why we're interested in memory issues. Do we want it to remember everything? If it remembers everything, should it never forget anything? If it never forgets anything, who should it tell? What mechanisms do you need to put into the robot so that it has some concept of not just blabbing everything it knows to anybody who asks? That is also an ethical issue. Because the robot is modeling emotional states, it will tag items going into its memory with the strength of its modeled emotions at the time when it acquires the memories. As in the human case, we tend to remember things better that are associated with high levels of emotion. You could say those are burned into my brain. Not quite burned into the robot's brain, but if an item of information is associated with a high level of emotion when the robot gathers it, that will tend to make it more retrievable in memory as well. We tackle these things with a model of affective state in the robot.
That's another very useful thing about having an affective model, actually. It allows you to give a weight to items in memory in a very natural, contextual sort of way. We've implemented a forgetting mechanism, which means the memory will naturally tend to lose the sort of detail that humans tend to lose over time. It's partly pragmatic. You don't want to remember everything: you'll go crazy in the human case, and you'll run out of memory in the robot case. It also has to do with some of these ethical issues. Over time, we start to forget things that we probably don't want to be able to tell people. Yes, the memory is a human-like memory in the sense that we're drawing on research about human memory and we're trying to build something which has some of those features, rather than something that's more like a database.
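The emotion-weighted memory and forgetting mechanism described above can be sketched as follows: items are stored with an emotional strength, retrievability decays over time, high-emotion items decay more slowly, and items that fade below a threshold are forgotten. The decay formula and parameter values here are invented for the sketch, not taken from Lirec's actual architecture.

```python
import math

class AffectiveMemory:
    """Memory items tagged with emotional strength; weak items fade first."""
    def __init__(self, decay=0.5):
        self.decay = decay
        self.items = []   # (content, emotion in [0, 1], time stored)

    def store(self, content, emotion, now):
        self.items.append((content, emotion, now))

    def strength(self, item, now):
        """Retrieval strength: exponential decay with age, slowed
        by the emotion tagged at storage time."""
        content, emotion, t = item
        return math.exp(-self.decay * (1.0 - emotion) * (now - t))

    def forget(self, now, threshold=0.2):
        """Drop details whose retrieval strength has faded below threshold."""
        self.items = [it for it in self.items
                      if self.strength(it, now) >= threshold]

mem = AffectiveMemory()
mem.store("fire alarm went off", emotion=0.9, now=0)   # highly emotional
mem.store("printer out of paper", emotion=0.1, now=0)  # mundane detail
mem.forget(now=10)  # the mundane item has faded; the emotional one survives
```

This also hints at the ethical angle above: tuning `decay` and `threshold` controls how long potentially sensitive details remain retrievable.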
Dave Griffiths: Germination X is an online social game about permaculture, which uses the technology and the research of Lirec in a very different context to the one it's been studied in. Really, we're trying to see if this kind of sociable technology, put into this kind of environment, can help us develop the connections between people in an online game, a kind of Facebook environment. It very much takes Farmville and uses the same metaphors in technology, but instead of applying that very particular view of farming and consumption, whose logical conclusion is these kinds of industrial farming techniques, what happens if we put in a very different assumption about the world? Does this result in different strategies among people playing the game? The difference between screen-based agents and robotic agents is that with screen-based agents you've got a lot more information about the world. With robots, it's a lot more complicated to understand the real world. There are a lot of things that you can take for granted with screen-based agents or characters, because the human is sort of joining them in their world rather than the other way around.
Carsten Zoll: Most of our partners really investigate companions because they are technicians and they build their companions. Our role is to have more of a focus on the user. In the Lirec project, we are interested in comparing human-human relationships and human-companion relationships. Are they similar or are they different? We found out that they are both, because there are some dimensions that apply to both human-human and human-companion relationships. For example, people perceive relationships in terms of power or intimacy. But they are also different. For example, the dimension of intimacy has to be further distinguished for human-companion relationships. Here we can distinguish psychological intimacy, which deals with the question of how well the companion really understands the user and whether there is some exchange of intimate information, from physical intimacy, which deals with the question of whether there is any physical contact between the two interaction partners. We found out that what is extremely important for the user is control: control over the interaction and also control over the data. People don't have that much trust at the moment, but maybe this is a thing that should be addressed to really develop companionship. Normally trust comes from successful interaction, so I think you have to bring people together with companions, and the companions have to show that they are trustworthy.
Kerstin Dautenhahn: I think that the topic of robot companions for home assistance and possibly other scenarios is a very, very interesting area, not only for research, but also for possible applications. For me, it's a very exciting time.