Lirec Architecture

Part of Project Lirec

The technology developed for Lirec is shared between the research partners and has to:

  1. Run on very different platforms
  2. Reuse code across these platforms
  3. Support migration at runtime between platforms

On the whole, there has historically been little sharing of this kind of technology. This is partly because generalising is hard, given all the possible types of robots and implementations. However, Lirec has to generalise, as it uses a wide variety of platforms and needs to share as much code as possible.

While some existing architectures are available, none of them address the needs of Lirec, whose focus is on:

  1. Long-term companionship rather than solving a purely technical problem
  2. Working across different platforms

In the past, there have been two broad approaches to robot design:

  • Hierarchical, model-based planning: expensive and difficult, since an accurate world-state model must be maintained
  • Behavioural: less state and purely local decisions, but liable to local minima and opaque to the programmer

This can be summed up as “predictive vs reactive”.

The plan is to use a hybrid approach (BIRON, for example), where the high-level predictive element constrains the reactive one in order to combine local decisions with a world model.
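To make this concrete, here is a minimal sketch of the hybrid style in C++. All names here are hypothetical illustrations rather than the actual Lirec or BIRON interfaces: the predictive layer publishes its current goal, and only reactive behaviours compatible with that goal are allowed to fire on local sensor data.

  #include <functional>
  #include <iostream>
  #include <string>
  #include <vector>

  // A reactive behaviour fires on a local, sensor-driven condition and is
  // tagged with the high-level goals it is compatible with (hypothetical).
  struct Behaviour {
      std::string name;
      std::vector<std::string> compatible_goals;
      std::function<bool()> trigger;  // local decision, no world model
      std::function<void()> act;
  };

  // The predictive layer constrains the reactive one: only behaviours
  // compatible with the currently planned goal may run this tick.
  void tick(const std::string& goal, std::vector<Behaviour>& behaviours) {
      for (auto& b : behaviours) {
          bool allowed = false;
          for (const auto& g : b.compatible_goals)
              if (g == goal) allowed = true;
          if (allowed && b.trigger()) b.act();
      }
  }

  int main() {
      std::vector<Behaviour> behaviours = {
          {"avoid_obstacle", {"follow_person", "goto_dock"},
           [] { return true; },   // pretend a proximity sensor fired
           [] { std::cout << "steering away\n"; }},
          {"greet_user", {"follow_person"},
           [] { return false; },  // no face detected this tick
           [] { std::cout << "hello!\n"; }},
      };
      // The planner picks the goal; local decisions happen within it.
      tick("follow_person", behaviours);
  }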

The architecture will consist of three layers of abstraction:

  • Level 1 - The device API layer (the existing hardware drivers)
  • Level 2 - Logical mappings of devices into competencies
  • Level 3 - ION, the platform-independent layer

Level 2 will provide a reference architecture with modular capabilities called competencies. Not all competencies will make sense for all platforms, and different implementations of the same competency will be needed for different platforms.

Actuation:

  • Speech/Sound: Text to speech, Non-verbal sounds
  • Visual: Gesture execution, Lip sync, Facial expression, Gaze/head movement, Expressive behaviour
  • Movement: Move limb, Follow person, Path planning
  • Object Manipulation: Grasp/Place object

Sensing:

  • Identification: User recognition, Object recognition
  • Vision: Face detection, Gesture recognition (simple set), Emotion recognition (simple set), Body tracking
  • Sounds: Speech recognition, Non-verbal sounds
  • Positioning: Localization, Locate person, Locate object
  • Distance: Obstacle avoidance, User proxemic distance sensing/control
  • Internal State: Battery status, Competence execution monitoring
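Each competency would sit behind a common interface. As a rough sketch of how Level 2 could look in C++ (hypothetical names, not the actual Lirec API), the same competency is declared once and implemented per platform on top of the Level 1 device drivers, so Level 3 code never touches hardware directly:

  #include <iostream>

  // Level 2: a competency as a platform-independent interface
  // (hypothetical sketch).
  class FaceDetection {
  public:
      virtual ~FaceDetection() = default;
      virtual bool detect_face() = 0;  // true if a face is in view
  };

  // Platform-specific implementations wrap the Level 1 device API.
  class RobotCameraFaceDetection : public FaceDetection {
  public:
      bool detect_face() override {
          // ... call the robot's camera driver and vision code here ...
          return false;
      }
  };

  class GraphicalAgentFaceDetection : public FaceDetection {
  public:
      bool detect_face() override {
          // ... call a desktop webcam and vision library here ...
          return true;
      }
  };

  // Level 3 (ION) sees only the abstract competency, so agent code
  // can migrate between platforms unchanged.
  void agent_step(FaceDetection& faces) {
      if (faces.detect_face()) std::cout << "someone is here\n";
  }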

Some competencies are dependent on others; facial expression, for example, will require a face detection competency to operate. Work will concentrate on the computer vision competencies first.
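One way such dependencies could be made explicit (again a hypothetical sketch, not the Lirec implementation) is for each competency to declare what it requires, so a platform can check whether a competency is usable before enabling it:

  #include <iostream>
  #include <map>
  #include <string>
  #include <vector>

  // Hypothetical dependency table: each competency names the
  // competencies it needs in order to operate.
  const std::map<std::string, std::vector<std::string>> deps = {
      {"face_detection",    {}},
      {"facial_expression", {"face_detection"}},
      {"lip_sync",          {"text_to_speech"}},
  };

  // A competency is usable on this platform only if everything it
  // depends on is among the installed competencies.
  bool usable(const std::string& name,
              const std::vector<std::string>& installed) {
      auto it = deps.find(name);
      if (it == deps.end()) return false;
      for (const auto& d : it->second) {
          bool found = false;
          for (const auto& i : installed)
              if (i == d) found = true;
          if (!found) return false;
      }
      return true;
  }

  int main() {
      std::vector<std::string> installed = {"face_detection"};
      std::cout << usable("facial_expression", installed) << "\n";  // 1
      std::cout << usable("lip_sync", installed) << "\n";           // 0
  }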
