

LIREC aims to establish a multi-faceted theory of artificial long-term companions (including memory, emotions, cognition, communication, learning, etc.), embody this theory in robust and innovative technology and experimentally verify both the theory and technology in real social environments. Whether as robots, social toys or graphical and mobile synthetic characters, interactive and sociable technology is advancing rapidly. However, the social, psychological and cognitive foundations and consequences of such technological artefacts entering our daily lives - at work, or in the home - are less well understood.

Agent platforms

The platforms currently under consideration for long term companions are:

  • Mobile robots
  • Fixed robots
  • Handheld devices
  • Fixed graphical systems
Migration

An interesting feature of the research is migration between these platforms. Agents which need to build up a long term relationship with their users will have to switch forms depending on the needs of the user at different times. The migration of an agent between devices, and how people relate to it, is a core element of the research.
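Conceptually, migration means serializing the agent's identity and memory on one platform and restoring it on another, so the relationship with the user survives the change of body. A minimal sketch of this idea in Python (the `Agent`, `Platform` and `migrate` names are invented for illustration and are not LIREC's actual API):

```python
import json

class Agent:
    """Toy companion agent whose identity and memory can move between platforms."""
    def __init__(self, name, memory=None):
        self.name = name
        self.memory = list(memory) if memory else []  # long-term memory travels with the agent

    def remember(self, event):
        self.memory.append(event)

    def serialize(self):
        # Capture everything needed to recreate the agent elsewhere.
        return json.dumps({"name": self.name, "memory": self.memory})

    @classmethod
    def deserialize(cls, blob):
        data = json.loads(blob)
        return cls(data["name"], data["memory"])

class Platform:
    """A body the agent can inhabit, e.g. a mobile robot or a handheld device."""
    def __init__(self, kind):
        self.kind = kind
        self.embodied = None  # the agent currently living on this platform

def migrate(agent, source, target):
    """Suspend the agent on the source platform and resume it on the target."""
    blob = agent.serialize()                    # in reality this would travel over a network
    source.embodied = None                      # the agent leaves its old body...
    target.embodied = Agent.deserialize(blob)   # ...and reappears in the new one
    return target.embodied
```

With this sketch, an agent that remembers an event while embodied on a robot still remembers it after migrating to a handheld, which is the property users should experience as "the same companion".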

Scenarios

In order to test and showcase the technology developed for LIREC, several scenarios have been designed to promote companionship. These scenarios are shared between three of the research partners.

Heriot-Watt

Spirit of the building:

  1. Team buddy, a mobile robot/collective memory for a team working in a lab
  2. Personal guide - for navigating around a university campus, remembering appointments, telling you where to go
  3. In the wild - a gossip/chat robot - appears on a large screen in a social area
INESC-ID
  1. Game companion for young children
  2. Personal trainer - which can migrate to mobile robot for jogging exercises
  3. Welcome to the jungle - talk to game characters through a robot which can alternate between real and game world
The University of Hertfordshire
  1. Fetch and carry, to help with physical impairment or provide convenience
  2. Cognitive prosthetic - memory aid for tasks
  3. Telepresence card player - a robot mediates play between two people
  4. Teaching proxemic preferences - a robot learns where to be relative to the user in different situations
  5. Travelling companion - agent migration, to stay with user during home, work, shopping

Experimental testing

All scenarios developed are to be experimentally tested and showcased to the public.

Architecture

The technology developed for LIREC is shared between the research partners, and has to:

  1. Run on very different platforms
  2. Reuse code across these platforms
  3. Support migration at runtime between platforms
Existing robot architectures

On the whole, there has historically been little sharing of this kind of technology, partly because generalising across all possible robot types and implementations is hard. However, LIREC has to generalise, since it uses a wide variety of platforms and needs to share as much code as possible.

While some existing architectures are available, none of them addresses the needs of LIREC, given its focus on:

  1. Long term companionship rather than solving a purely technical problem
  2. Working across different platforms
Methodology

In the past there have been two broad approaches to robot design:

  • Hierarchical, model-based planning: expensive and difficult, since an accurate world-state model must be maintained
  • Behavioural approach: less state and purely local decisions, but liable to local minima and opaque to the programmer

This can be summed up as “predictive vs reactive”.

The plan is to use a hybrid approach (BIRON is an example), where the high-level predictive element constrains the reactive one, combining local decisions with a world model.
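The hybrid idea can be illustrated as a reactive layer that proposes purely local actions, filtered by a deliberative layer that knows the plan. This is only a sketch of the pattern, not the BIRON implementation; all names and thresholds are invented:

```python
def reactive_layer(sensor_distance):
    """Purely local decision: avoid nearby obstacles, otherwise go forward."""
    if sensor_distance < 0.5:      # metres; obstacle too close
        return "turn"
    return "forward"

def deliberative_layer(world_model):
    """Predictive element: the current plan constrains which actions are allowed."""
    if world_model["goal_reached"]:
        return {"stop"}            # plan says we are done
    return {"forward", "turn"}     # plan permits normal navigation

def hybrid_step(sensor_distance, world_model):
    """One control step: the predictive layer constrains the reactive proposal."""
    allowed = deliberative_layer(world_model)
    proposal = reactive_layer(sensor_distance)
    return proposal if proposal in allowed else "stop"
```

The reactive layer never consults the world model directly, so it stays cheap and robust; the deliberative layer never issues motor commands, so an out-of-date world model can only constrain, not misdirect, the robot.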

The architecture will consist of 3 layers of abstraction:

  • Level 1 - The device layer, hardware drivers
  • Level 2 - platform-dependent → logical mappings
  • Level 3 - ION, platform independent

Level 2 will provide a reference architecture with modular capabilities called competencies. Not all competencies will make sense for all platforms, and different implementations of the same competency will be needed for different platforms.
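One way to realise this is a registry that maps abstract competency names to platform-specific implementations, so agents request a competency by name and receive whatever that platform provides. A hypothetical sketch (the `CompetencyRegistry` name and the example implementations are invented for illustration):

```python
class CompetencyRegistry:
    """Level 2 sketch: map abstract competency names to platform-specific code."""
    def __init__(self, platform):
        self.platform = platform
        self._impls = {}

    def register(self, name, impl):
        self._impls[name] = impl

    def has(self, name):
        # Not every competency makes sense on every platform.
        return name in self._impls

    def invoke(self, name, *args):
        if not self.has(name):
            raise NotImplementedError(f"{name} unavailable on {self.platform}")
        return self._impls[name](*args)

# Different implementations of the same competency on different platforms:
robot = CompetencyRegistry("mobile robot")
robot.register("text_to_speech", lambda text: f"speaker says: {text}")

screen = CompetencyRegistry("fixed graphical system")
screen.register("text_to_speech", lambda text: f"speech bubble: {text}")
```

Platform-independent code (Level 3) can then call `invoke("text_to_speech", ...)` without knowing whether the result is audio from a robot's speaker or a speech bubble on a screen.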

Competencies table

  • Actuation: Speech/Sound, Visual, Movement, Object Manipulation
  • Sensing: Identification, Vision, Sounds, Positioning, Distance, Internal State
  • Face finding
  • Expression recognition
  • Text to speech
  • Obstacle avoidance

The computer vision competencies will be the initial focus.

YARP is a good example of this kind of code sharing on the Linux platform.

  • Last modified: 2009-01-08 10:24
  • by 86.166.165.142