lirec_notes [2009-01-08 10:40] – davegriffiths
==== LIving with Robots and InteractivE Companions ====

LIREC aims to establish a multi-faceted theory of artificial long-term companions (including memory, emotions, cognition and communication).

http://
==Agent platforms==

The platforms currently under consideration for long term companions:
  * Mobile robots
  * Fixed robots
  * Handheld devices
  * Fixed graphical systems
==Migration==

An interesting feature of the research is migration between these platforms. Agents which need to build up a long term relationship with their users will have to switch forms depending on the needs of the user at different times. The migration of an agent between these devices, and how people relate to that migration, is a key area of study.
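As a rough illustration of what migration could mean in practice (a minimal sketch, all names and the JSON format are hypothetical, not the Lirec design): the agent's internal state is serialized on the source platform and restored on the target, so the companion "moves" while keeping its memory.

```python
import json

class CompanionAgent:
    """Toy agent whose state can migrate between embodiments (hypothetical model)."""

    def __init__(self, name, memory=None, mood=0.0):
        self.name = name
        self.memory = memory or []   # long-term memory of interactions
        self.mood = mood             # crude scalar stand-in for emotional state

    def serialize(self):
        # Pack identity and internal state for transfer to another device
        return json.dumps({"name": self.name, "memory": self.memory, "mood": self.mood})

    @classmethod
    def deserialize(cls, blob):
        state = json.loads(blob)
        return cls(state["name"], state["memory"], state["mood"])

# Migration: the agent leaves a mobile robot and reappears on a handheld device
on_robot = CompanionAgent("buddy", memory=["met Alice in the lab"], mood=0.7)
blob = on_robot.serialize()                  # sent over the network in a real system
on_handheld = CompanionAgent.deserialize(blob)
assert on_handheld.memory == ["met Alice in the lab"]
```

A real system would of course also have to transfer behaviours and negotiate what the target platform can express, which is exactly what makes migration an interesting research problem.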
===Scenarios===
In order to test and showcase the technology developed for Lirec, a number of preliminary scenarios have been proposed by the research partners:
==Heriot Watt==

Spirit of the building:
  - Team buddy - a mobile robot acting as a collective memory for a lab team
  - Personal guide - for navigating around a university campus
  - In the wild - a gossip/chat robot - appears on a large screen in a social area
==INESC-ID==

  - Game companion for young children
  - Personal trainer
  - Welcome to the jungle - talk to game characters through a robot which can alternate between the real and game world
==The University of Hertfordshire==

  - Fetch and carry - help with physical impairment, or for convenience
  - Cognitive prosthetic - memory aid for tasks
  - Telepresence card player - a robot mediates play between two people
  - Teaching
  - Travelling companion - agent migration, to stay with the user at home, at work and while shopping
+ | |||
+ | ===Experimental testing=== | ||
+ | |||
+ | All scenarios developed are to be experimentally tested and showcased to the public. | ||
===Architecture===

The technology developed for Lirec is shared between the research partners, and has to:
  - Run on very different platforms
  - Reuse code across these platforms
  - Support migration at runtime between platforms
==Existing robot architectures==

On the whole, there has historically been a lack of sharing of this kind of technology. This is partly because generalising is hard, considering all the types of robots possible. However, Lirec has to generalise, as it uses a wide variety of platforms. Some generic systems do exist - NASA and ESA use NASREM/NIST RCS with their subcontractors - but none of the existing architectures address the needs of Lirec, given the focus is:
  - Long term companionship
  - Working across different platforms
==Methodology==

In the past there have been 2 broad approaches to [[robot design]]:
  * Hierarchical, model based planning = expensive and slow to maintain an accurate world state model
  * Behavioural approach = less state, local decisions, liable to local minima, opaque to program
This can be summed up as "predictive vs reactive". The plan is to use a hybrid approach.
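The hybrid idea can be pictured with a toy control loop (an illustrative sketch only, not the Lirec design): fast reactive rules run on every tick and can override a slower deliberative plan, so the robot stays safe without the planner having to model everything.

```python
def reactive_layer(sensors):
    """Fast, stateless rules: override the plan when an obstacle is close."""
    if sensors.get("obstacle_distance", float("inf")) < 0.5:
        return "avoid_obstacle"
    return None  # no reflex triggered, defer to the planner

def deliberative_layer(plan):
    """Slow, model-based planning: here just pop the next planned action."""
    return plan.pop(0) if plan else "idle"

def control_step(sensors, plan):
    # Reactive behaviour takes priority; otherwise follow the plan.
    return reactive_layer(sensors) or deliberative_layer(plan)

plan = ["goto_kitchen", "fetch_cup"]
print(control_step({"obstacle_distance": 0.2}, plan))  # avoid_obstacle (reactive wins)
print(control_step({"obstacle_distance": 3.0}, plan))  # goto_kitchen (plan resumes)
```

The predictive layer keeps the long-horizon intent while the reactive layer handles the local minima and surprises it is bad at.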
The architecture will consist of 3 layers of abstraction:
  * Level 1 - The device layer, platform specific
  * Level 2 - The logical layer, architecture dependent
  * Level 3 - ION, platform independent

Level 2 will provide a reference architecture with modular capabilities called competencies. Not all competencies will make sense for all platforms, and different implementations of the same competency will be needed for different platforms.
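One way to picture "different implementations of the same competency" (a minimal sketch - the class and platform names are illustrative, not Lirec APIs): the upper layers see only a common interface, while the device layer picks a platform-specific back-end at runtime.

```python
class TextToSpeech:
    """Abstract competency: upper layers program against this interface only."""
    def say(self, text):
        raise NotImplementedError

class RobotTTS(TextToSpeech):
    """Device-layer implementation for a robot with onboard speakers."""
    def say(self, text):
        return f"[robot speaker] {text}"

class HandheldTTS(TextToSpeech):
    """Device-layer implementation for a handheld device."""
    def say(self, text):
        return f"[phone audio] {text}"

def make_tts(platform):
    # The platform-specific layer selects the right implementation
    backends = {"mobile_robot": RobotTTS, "handheld": HandheldTTS}
    return backends[platform]()

assert make_tts("handheld").say("hello") == "[phone audio] hello"
```

A fixed graphical system might implement the same interface with subtitles instead of audio - the point is that the competency's contract, not its implementation, is what the reference architecture standardises.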
+ | |||
+ | Competencies table | ||
+ | ^ Actuation ^^^^ Sensing ^^^^^^ | ||
+ | | **Speech/ | ||
+ | | Text to speech | Gesture execution | Move limb | Grasp/Place object | User recognition | Face detection | Speech recogn | Localization | Obstacle avoidance | Battery status | | ||
+ | | Non-verbal sounds | Lip sync | Follow person | | Obj recognition | Gesture recognition (simple set) | Non-verbal sounds | Locate person | Locate obj | Competence execution monitoring | | ||
+ | | | Facial expression | Path planning | | | Emotion recognition (simple set) | | | User proximic distance sensing/ | ||
+ | | | | Gaze/head movement | | | Body tracking | | | | | | ||
+ | | | | Expressive behaviour | | | | | | | | | ||
+ | |||
Some competencies are dependent on others; facial expression, for example, will require a face detection competency to operate. The computer vision competencies are to be concentrated on first.
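Dependencies like "facial expression requires face detection" mean competencies have to be brought up in dependency order. A small sketch using a topological sort (the competency names come from the table above; the dependency edges themselves are illustrative assumptions):

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# competency -> set of competencies it depends on (illustrative edges)
deps = {
    "facial_expression": {"face_detection"},
    "gaze_head_movement": {"face_detection"},
    "emotion_recognition": {"face_detection"},
    "face_detection": set(),
}

# static_order() yields a start-up order that respects every dependency
order = list(TopologicalSorter(deps).static_order())
assert order.index("face_detection") < order.index("facial_expression")
```

The same structure would also let the architecture report which higher-level competencies become unavailable when a prerequisite (say, the camera) is missing on a given platform.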
==Links==

[[Lirec Work Packages]]

[[http://...]]