lirec_architecture [2009-01-15 12:08] – davegriffiths
====Coping====

Similar to appraisal, but in reverse - what should be done given the agent's current affective state.
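As a rough illustration of coping as "appraisal in reverse", the sketch below maps the agent's current affective state to an action. All names (`AffectiveState`, `cope`, the emotion labels and thresholds) are assumptions for illustration, not the actual Lirec API.

```python
from dataclasses import dataclass, field

@dataclass
class AffectiveState:
    # emotion name -> intensity in [0, 1]
    emotions: dict = field(default_factory=dict)

def cope(state: AffectiveState) -> str:
    """Reverse of appraisal: decide what to do given how the agent feels."""
    fear = state.emotions.get("fear", 0.0)
    joy = state.emotions.get("joy", 0.0)
    if fear > 0.7:
        return "withdraw"   # strong fear: avoid the presumed threat
    if joy > 0.5:
        return "approach"   # things are going well: continue the interaction
    return "idle"           # nothing salient: no change of behaviour

print(cope(AffectiveState(emotions={"fear": 0.9})))  # -> withdraw
```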
====Affective State====
====Models of others (Theory of Mind)====
This is where the information regarding the agents and humans that the agent has met is stored. Each model has a similar architectural form to the agent itself. This means that in order to estimate what another agent will think of an action, the agent can run an appraisal using its model of the other agent and look at the changes to that model's affective state. The idea is that as more information is gathered (the more the agent gets to know its user), the better these estimates will become.

The Lirec model is only recursive to one level, i.e. it does not attempt to model other agents' models of others.
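The mechanism above can be sketched roughly as follows: each stored model is agent-shaped and reuses the agent's own appraisal machinery, but models hold no nested models of others, giving the one-level recursion described. Class and method names (`Agent`, `meet`, `estimate_reaction`) are hypothetical.

```python
class Agent:
    def __init__(self, name, likes, model_others=True):
        self.name = name
        self.likes = likes          # actions this agent appraises positively
        self.affect = 0.0           # toy one-number affective state
        # one level of recursion only: models of others hold no models
        self.models = {} if model_others else None

    def appraise(self, action):
        """Update affective state in response to an observed action."""
        self.affect += 1.0 if action in self.likes else -1.0

    def meet(self, other_name, likes):
        """Store an agent-shaped model of someone we have met."""
        self.models[other_name] = Agent(other_name, likes, model_others=False)

    def estimate_reaction(self, other_name, action):
        """Run an appraisal on our model of the other agent and
        report the change in that model's affective state."""
        model = self.models[other_name]
        before = model.affect
        model.appraise(action)
        return model.affect - before

robot = Agent("robot", likes={"charge"})
robot.meet("user", likes={"tea"})
print(robot.estimate_reaction("user", "tea"))    # -> 1.0
print(robot.estimate_reaction("user", "noise"))  # -> -1.0
```

As more interactions are observed, the stored `likes` (or whatever richer state the real model carries) would be refined, improving the estimates.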
====Long term memory====
The long term memory is where the agent stores its information on its history of interactions with the user and other agents. This memory should be linked to all the decision making in the agent.
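A minimal sketch of such a store, assuming a simple per-partner event log that decision making can query (the `familiarity` signal and all names here are illustrative assumptions):

```python
import time
from collections import defaultdict

class LongTermMemory:
    def __init__(self):
        self.episodes = defaultdict(list)   # partner name -> list of events

    def record(self, partner, event, outcome):
        """Log one interaction with a partner agent or human."""
        self.episodes[partner].append(
            {"event": event, "outcome": outcome, "time": time.time()})

    def history(self, partner):
        """Full interaction history, for decision making to inspect."""
        return self.episodes[partner]

    def familiarity(self, partner):
        """A crude signal: how many interactions have we had with them?"""
        return len(self.episodes[partner])

ltm = LongTermMemory()
ltm.record("user", "greeting", "smiled")
ltm.record("user", "fetched tea", "thanked")
print(ltm.familiarity("user"))  # -> 2
```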
====Short term memory====
Where the agent keeps its current goals, plans and action rules.
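One way to picture this working memory is as the current goal stack, the plan being executed, and the active action rules; the structure and names below are assumptions for illustration, not the Lirec design.

```python
from collections import deque

class ShortTermMemory:
    def __init__(self):
        self.goals = []         # current goals, most important first
        self.plan = deque()     # queued actions for the top goal
        self.action_rules = {}  # condition -> action

    def push_goal(self, goal, plan_steps):
        """Adopt a new top-priority goal and the plan to achieve it."""
        self.goals.insert(0, goal)
        self.plan = deque(plan_steps)

    def next_action(self):
        """Pop the next step of the current plan, or None if finished."""
        return self.plan.popleft() if self.plan else None

stm = ShortTermMemory()
stm.push_goal("make user tea", ["boil water", "pour", "serve"])
print(stm.next_action())  # -> boil water
```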