
Human Monitor

Overview

The Human Monitor module interprets the current world state, which contains human activity information, in order to recognize basic human actions such as Pick or Place. For now, the module is fairly basic: it relies mainly on distances between humans and objects. However, there is room for improvement, for example by taking the context into account during action recognition (e.g. the action the agent is supposed to perform) or by using probabilistic models.
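As a rough illustration of this distance-based approach, here is a minimal sketch (not the actual implementation; the function names, data layout and threshold value are assumptions) that computes the Euclidean distance between the monitored hand joint and each known object, and reports the objects close enough to be pick candidates:

```python
import math

def distance(p1, p2):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def pick_candidates(hand_position, object_positions, pick_threshold=0.05):
    """Return the objects whose distance to the hand is below the pick threshold.

    hand_position:    (x, y, z) of the monitored hand joint (e.g. the right hand).
    object_positions: dict mapping object name -> (x, y, z).
    pick_threshold:   illustrative value; the module reads it from threshold/pick.
    """
    return [name for name, pos in object_positions.items()
            if distance(hand_position, pos) < pick_threshold]
```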

Recognized actions

For now, the module can recognize the following types of actions (a sketch of how they might be distinguished follows the list):

  • Pick
  • Place
  • Drop
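A minimal, purely hypothetical sketch of how these three actions could be told apart using the distance thresholds listed under Parameters below (the real module may use different rules): an empty hand close to an object suggests a Pick; a held object brought close to a support suggests a Place; a held object that the hand moves away from, with no nearby support, suggests a Drop. The disengage threshold resets the detector so the same action is not reported twice.

```python
import math

def _dist(p1, p2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def recognize(hand_pos, held_object, object_positions, support_positions, thresholds):
    """Hypothetical distance-based recognizer returning (action, target) or None.

    held_object: name of the object currently attached to the hand, or None.
    thresholds:  dict with keys 'pick', 'place', 'drop' and 'disengage'
                 (see the Parameters section below).
    """
    if held_object is None:
        # Empty hand approaching an object -> Pick candidate.
        for name, pos in object_positions.items():
            if _dist(hand_pos, pos) < thresholds['pick']:
                return ('Pick', name)
    else:
        # Held object brought close to a support -> Place candidate.
        for support, pos in support_positions.items():
            if _dist(hand_pos, pos) < thresholds['place']:
                return ('Place', support)
        # Hand moving away from the object it was holding -> Drop candidate.
        if _dist(hand_pos, object_positions[held_object]) > thresholds['drop']:
            return ('Drop', held_object)
    return None

def is_disengaged(hand_pos, last_target_pos, disengage_threshold):
    """Hypothetical reset rule: the hand must move beyond the disengage
    threshold before a new action on the same object or support is reported."""
    return _dist(hand_pos, last_target_pos) > disengage_threshold
```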

Services and topics

Provided services

Published topics

Parameters

  • shouldDetect: true if action detection should be active (bool)
  • rightHand: name of the joint of the human's right hand (string)
  • threshold/pick: distance to objects below which a pick action can be detected (double)
  • threshold/place: distance to objects below which a place action can be detected (double)
  • threshold/drop: distance to objects used to detect drop actions (double)
  • threshold/disengage: distance to objects beyond which the human is considered disengaged from a previous action (double)
  • replacementSupport/"support": name of another support that will replace the original support "support" once a place action on it has been detected (string)
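Assuming these parameters are exposed on the ROS parameter server under the node's private namespace (an assumption; the node name, namespace and default values below are illustrative), they could be read with rospy as follows:

```python
#!/usr/bin/env python
import rospy

rospy.init_node('human_monitor')

# Illustrative defaults; the actual values come from the launch/param files.
should_detect    = rospy.get_param('~shouldDetect', True)
right_hand_joint = rospy.get_param('~rightHand', 'rightHand')
thresholds = {
    'pick':      rospy.get_param('~threshold/pick', 0.05),
    'place':     rospy.get_param('~threshold/place', 0.05),
    'drop':      rospy.get_param('~threshold/drop', 0.10),
    'disengage': rospy.get_param('~threshold/disengage', 0.30),
}
# Per-support replacements, e.g. ~replacementSupport/table_area: 'placemat'
# (names here are made up for illustration).
replacement_supports = rospy.get_param('~replacementSupport', {})
```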