lirec_notes [2009-01-08 10:40] – davegriffiths
==== LIving with Robots and InteractivE Companions ====
LIREC aims to establish a multi-faceted theory of artificial long-term companions (including memory, emotions, cognition, communication, ...

http://...
==Agent platforms==

==Methodology==
In the past there have been two broad approaches to [[robot design]]:
  * Hierarchical, ...
  * Behavioural approach = less state, local decisions, but liable to local minima and opaque to the programmer
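The contrast can be made concrete with a minimal subsumption-style controller in the behavioural vein. This is a sketch only; the sensor names, thresholds, and actions are hypothetical and not part of any LIREC platform.

```python
# Minimal subsumption-style controller: each behaviour makes a local
# decision from current sensor readings, with no global plan or world
# model; a higher-priority behaviour subsumes the ones below it.
# All names here are illustrative.

def avoid_obstacle(sensors):
    if sensors["obstacle_distance"] < 0.3:  # metres; assumed threshold
        return "turn_left"
    return None                             # behaviour not applicable

def follow_person(sensors):
    if sensors["person_visible"]:
        return "move_towards_person"
    return None

def wander(sensors):
    return "move_forward"                   # default, always applicable

# Priority order: safety first, then goal behaviour, then default.
BEHAVIOURS = [avoid_obstacle, follow_person, wander]

def decide(sensors):
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(decide({"obstacle_distance": 0.1, "person_visible": True}))   # turn_left
print(decide({"obstacle_distance": 2.0, "person_visible": True}))   # move_towards_person
```

Note the trade-off the notes describe: the controller carries almost no state and each decision is local, but nothing prevents it from oscillating in a local minimum, and the emergent behaviour is hard to inspect from the code alone.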
The architecture will consist of three layers of abstraction:
  * Level 1 - The device layer, hardware drivers
  * Level 2 - Logical layer, platform dependent
  * Level 3 - ION, platform independent
Level 2 will provide a reference architecture with modular capabilities called competencies. Not all competencies will make sense for all platforms, and different implementations of the same competency will be needed for different platforms.
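One way to picture this split is a platform-independent competency interface with platform-dependent implementations behind it. The sketch below is an assumption for illustration: `FaceFindingCompetency`, the implementation classes, and the registry are hypothetical names, not LIREC APIs.

```python
# Sketch of the layered split: a platform-independent competency
# interface (the Level 3 view) with platform-dependent implementations
# (Level 2), which would in turn call hardware drivers (Level 1).
# All class and method names are illustrative.

from abc import ABC, abstractmethod

class FaceFindingCompetency(ABC):
    """One logical capability; a different implementation per platform."""
    @abstractmethod
    def find_faces(self, image):
        ...

class RobotFaceFinding(FaceFindingCompetency):
    def find_faces(self, image):
        # would run the robot camera's vision pipeline via Level 1 drivers
        return ["face@camera"]

class GraphicalAgentFaceFinding(FaceFindingCompetency):
    def find_faces(self, image):
        # a screen character might watch the user through a webcam instead
        return ["face@webcam"]

def register(platform):
    # Not every competency makes sense on every platform, so the
    # registry maps each platform to the competencies it supports.
    table = {
        "robot": [RobotFaceFinding()],
        "graphical_agent": [GraphicalAgentFaceFinding()],
    }
    return table.get(platform, [])

print([type(c).__name__ for c in register("robot")])  # ['RobotFaceFinding']
```

Code above the interface depends only on `FaceFindingCompetency`, so it stays portable across embodiments while each platform supplies (or omits) its own implementation.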
Competencies table:

^ Actuation ^^^^ Sensing ^^^^^^
| **Speech/...** | | | | | | | | | |
| Text to speech | ... | | | | | | | | |
| Non-verbal sounds | Lip sync | Follow person | | Obj recognition | Gesture recognition (simple set) | Non-verbal sounds | Locate person | Locate obj | Competence execution monitoring |
| | Facial expression | Path planning | | | Emotion recognition (simple set) | | | User proximic distance sensing/... | |
| | | Gaze/head movement | | | Body tracking | | | | |
| | | Expressive behaviour | | | | | | | |
Some competencies are dependent on others; facial expression, for example, requires a face detection competency in order to operate.
==Links==

[[Lirec Work Packages]]

[[http://...]]