

LIving with Robots and InteractivE Companions

Scenarios

Heriot Watt

Spirit of the building:

  1. Team buddy, mobile robot, collective memory for a lab team
  2. Personal guide - for navigating around a university campus, remember appointments etc
  3. In the wild - a gossip/chat robot - appears on a large screen in social area

INESC-ID
  1. Game companion for young children
  2. Personal trainer (migrate to mobile robot for jogging exercises etc)
  3. Welcome to the jungle - talk to game characters through a robot, robot can alternate between real and game world

University of Hertfordshire
  1. Fetch and carry, help with physical impairment or convenience
  2. Cognitive prosthetic - memory aid for tasks etc
  3. Telepresence card player - robot mediates play
  4. Teaching Proxemic preferences - robot learns where to be relative to the user in different situations
  5. Travelling companion - agent migration, to stay with user during home, work, shopping.

Architecture

Has to:

  1. Run on very different platforms
  2. Reuse code across these platforms
  3. Support migration at runtime between platforms

Platforms fall into four main types: mobile robot, fixed robot, handheld device, or fixed graphical system. Each has its own inherent restrictions.
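The runtime-migration requirement can be sketched as serializing the companion's state on the source platform and restoring it on the target, so the embodiment changes while the state persists. This is a minimal illustration; the names (`AgentState`, `migrate`) are hypothetical and not taken from the LIREC codebase.

```python
import json

class AgentState:
    """Hypothetical container for the state that must survive migration."""

    def __init__(self, memories, user_id):
        self.memories = memories  # e.g. the team buddy's collective memory
        self.user_id = user_id    # which user the companion belongs to

    def serialize(self):
        return json.dumps({"memories": self.memories, "user_id": self.user_id})

    @classmethod
    def deserialize(cls, data):
        d = json.loads(data)
        return cls(d["memories"], d["user_id"])

def migrate(state):
    """In practice the payload would travel over the network to the
    target platform; here the round trip happens in one process."""
    payload = state.serialize()
    return AgentState.deserialize(payload)
```

The point of the sketch is that only state crosses the platform boundary; everything device-specific stays behind, which is what makes running on very different platforms feasible.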

Existing robot architectures

On the whole, there is a lack of sharing of this kind of technology. This is partly because generalising is hard in this field, given the range of possible robot types. However, Lirec has to generalise, as it uses a wide variety of architectures.

NASREM/NIST RCS - NASA + ESA use a generic system with their subcontractors.

Methodology

In the past there have been 2 broad approaches to robot design:

  • Hierarchical, model-based planning = expensive to maintain an accurate world state model
  • Behavioural approach = less state, local decisions, liable to local minima, opaque to the programmer

This can be summed up as predictive vs reactive.

Current thinking is to use a hybrid approach (for example BIRON), where the predictive layer constrains the reactive layer, combining local decisions with a world model.
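The hybrid idea can be shown in a few lines: a reactive layer proposes an action from immediate sensor readings, and a predictive layer vetoes proposals that conflict with its world model. This is a toy sketch under assumed names; it is not how BIRON is actually structured.

```python
def reactive_propose(sensors):
    # Purely local decision: avoid the nearest obstacle, otherwise advance.
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return "go_forward"

def predictive_filter(action, world_model):
    # The planner constrains reactive choices using global knowledge,
    # e.g. refusing to advance into an area the model marks off-limits.
    if action == "go_forward" and world_model.get("ahead") == "forbidden":
        return "stop"
    return action

def decide(sensors, world_model):
    # Predictive constrains reactive: local decisions, checked
    # against the world model.
    return predictive_filter(reactive_propose(sensors), world_model)
```

The reactive part keeps the cheap, fast local behaviour; the predictive part supplies just enough world state to steer it away from local minima.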

The architecture will consist of 3 layers of abstraction:

  • Level 1 - device layer, architecture dependent
  • Level 2 - architecture dependent → logical mappings
  • Level 3 - ION, device independent

Level 2 will provide a reference architecture with modular capabilities called competencies. Not all competencies will make sense for all architectures, and different implementations of the same competency may exist.

Example competencies:

  • Face finding
  • Expression recognition
  • Text to speech
  • Obstacle avoidance
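One way to picture Level 2 is a common competency interface with platform-specific implementations registered against it, so the same competency name can resolve to different code (or to nothing at all) per platform. All class and function names below are illustrative assumptions, not LIREC APIs.

```python
class Competency:
    """Assumed base interface for a modular capability."""
    name = None

    def run(self, data):
        raise NotImplementedError

class CameraFaceFinder(Competency):
    # Hypothetical face-finding implementation for a robot camera.
    name = "face_finding"

    def run(self, data):
        return "faces from robot camera"

class ScreenFaceFinder(Competency):
    # A different implementation of the same competency, for a
    # fixed graphical system with a webcam.
    name = "face_finding"

    def run(self, data):
        return "faces from webcam"

REGISTRY = {}

def register(platform, competency):
    REGISTRY.setdefault(platform, {})[competency.name] = competency

def get_competency(platform, name):
    # Returns None when a competency makes no sense on a platform,
    # e.g. obstacle avoidance on a handheld device.
    return REGISTRY.get(platform, {}).get(name)
```

A lookup that returns `None` is the registry's way of saying the competency is simply absent on that platform, rather than an error.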

YARP

  • Last modified: 2009-01-06 14:51
  • by davegriffiths