Interaction ProMPs (IJRR 2017, AURO 2016, HUMANOIDS 2014)


Interaction Probabilistic Movement Primitive (Interaction ProMP) is a probabilistic framework based on Movement Primitives that allows for both human action recognition and the generation of collaborative robot policies. The parameters that describe the interaction between human and robot movements are learned via imitation learning. The procedure results in a probabilistic model from which the collaborative robot movement is obtained by (1) conditioning on the current observation of the human, and (2) inferring the corresponding robot trajectory and its uncertainty.

The illustration below summarizes the workflow of Interaction ProMP, where the distribution of parameterized human-robot trajectories is abstracted as a single bivariate Gaussian. The conditioning step is shown as the slicing of the distribution at the observation of the human. In the real case, the distribution is multivariate and correlates all the weights of all demonstrations.
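As a minimal numerical sketch of this slicing step (the numbers are made up for illustration, not taken from the papers): conditioning a bivariate Gaussian over a (human, robot) pair on an observed human value yields a Gaussian over the robot dimension.

```python
import numpy as np

# Joint Gaussian over (human, robot) parameters: a 2D toy abstraction.
mu = np.array([1.0, 2.0])            # [mean_human, mean_robot]
Sigma = np.array([[0.5, 0.3],
                  [0.3, 0.8]])       # correlated covariance (assumed values)

# Observe the human dimension.
x_h = 1.4

# Standard Gaussian conditioning: slice the joint at x_h.
mu_r = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (x_h - mu[0])          # ≈ 2.24
var_r = Sigma[1, 1] - Sigma[1, 0] * Sigma[0, 1] / Sigma[0, 0]     # ≈ 0.62
```

Because the two dimensions are positively correlated, observing the human above its mean shifts the robot's predicted mean upward while shrinking its variance.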


Some related publications:

  • Maeda, G.; Ewerton, M.; Lioutikov, R.; Ben Amor, H.; Peters, J. & Neumann, G. Learning Interaction for Collaborative Tasks with Probabilistic Movement Primitives. Proceedings of the International Conference on Humanoid Robots (HUMANOIDS), 2014, 527-534. [pdf here]
  • Maeda, G.; Neumann, G.; Ewerton, M.; Lioutikov, R.; Kroemer, O. & Peters, J. Probabilistic Movement Primitives for Coordination of Multiple Human-Robot Collaborative Tasks. Autonomous Robots, 2017, 41, 593-612. [pdf here]
  • Maeda, G.; Ewerton, M.; Neumann, G.; Lioutikov, R. & Peters, J. Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration. International Journal of Robotics Research (IJRR), 2017. [pdf here]

This Matlab code shows a simple toy problem where an observed agent with two degrees of freedom (DoFs) is trained together with an unobserved agent, also with two DoFs. The observed agent could be the human, and the unobserved agent the robot. Note that to collect training data we assume both agents are observed; this is how the prior distribution is learned from demonstrations. Once the model is learned (the green patch in the figure), we can observe only the human (the two blue dots) to compute a posterior distribution (the red patch), which can be used to control the robot.
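The multivariate conditioning behind this toy problem can be sketched as follows. This is a hedged illustration in Python/NumPy rather than the linked Matlab code: the weight dimensions, the observation matrix `H`, and the random placeholder prior are all assumptions; only the Kalman-style Gaussian conditioning mirrors the method described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the weight vector w stacks basis-function weights of
# all DoFs of both agents; human weights first, robot weights second.
n_h, n_r = 10, 10                     # weights per agent (assumed sizes)
n = n_h + n_r

# Prior over w learned from demonstrations (random SPD placeholder here).
A = rng.standard_normal((n, n))
Sigma_w = A @ A.T + n * np.eye(n)
mu_w = rng.standard_normal(n)

# Observation model y = H w + noise, where H picks out basis activations
# of the HUMAN DoFs at the observed time steps (placeholder values here).
m = 4
H = np.zeros((m, n))
H[:, :n_h] = rng.standard_normal((m, n_h))
R = 1e-2 * np.eye(m)                  # observation noise covariance
y = rng.standard_normal(m)            # partial observation of the human

# Gaussian conditioning (Kalman-style update) on the partial observation.
K = Sigma_w @ H.T @ np.linalg.inv(H @ Sigma_w @ H.T + R)
mu_post = mu_w + K @ (y - H @ mu_w)
Sigma_post = Sigma_w - K @ H @ Sigma_w

# The posterior over the robot's weights is the trailing block of mu_post /
# Sigma_post; decoding it with the robot's basis functions yields the robot
# trajectory and its uncertainty.
```

Because the prior correlates human and robot weights, observing only the human still tightens the posterior over the robot's weights, which is exactly the red patch shrinking relative to the green one in the figure.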

This video shows the vanilla implementation of multiple Interaction ProMPs running on an assembly task. Note that the robot response is quite slow as the human has to wait for the action recognition. The related papers are the HUMANOIDS 2014 and the AURO 2016.

We improved the robot response by proposing a probabilistic method to estimate the phase of the human as he/she moves. A simple version of this method is described in this IJRR 2017 paper and a more sophisticated version that can also address incomplete observations can be found in this IROS 2015 paper. The video next shows the result with phase estimation.
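A rough sketch of the phase-estimation idea follows. This is not the algorithm from the papers, only an assumed simplification: a grid search over candidate temporal scalings, scoring each by the Gaussian likelihood of the partial observation under the learned trajectory distribution.

```python
import numpy as np

def estimate_phase(obs, mu_ref, var_ref, alphas):
    """Pick the temporal scaling (phase rate) that best explains a partial
    observation. obs: first k samples of the human's movement.
    mu_ref, var_ref: learned mean/variance on a reference time grid.
    alphas: candidate phase rates to test (hypothetical discretization)."""
    T, k = len(mu_ref), len(obs)
    best_alpha, best_ll = None, -np.inf
    for a in alphas:
        # Map the k observed samples onto the reference timeline at rate a.
        idx = np.clip((a * np.arange(k)).astype(int), 0, T - 1)
        # Gaussian log-likelihood of the observation under this phase.
        ll = -0.5 * np.sum((obs - mu_ref[idx]) ** 2 / var_ref[idx]
                           + np.log(2 * np.pi * var_ref[idx]))
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    return best_alpha
```

With the phase estimated early in the movement, conditioning can run on the partial observation without waiting for the human to finish, which is what speeds up the robot's response.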

In our quest to make the interaction as fluid as possible, we also considered predicting the possible sequences of collaborative actions by constructing a lookup table with many variations of an assembly task. The action recognition of Interaction ProMPs is combined with a nearest-neighbor search to find the most probable sequence. This method was presented in this AAAI symposium paper. The video is shown below.
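The lookup-table idea can be sketched roughly as follows; the action labels and the mismatch-count distance are hypothetical placeholders, not the actual table or metric from the paper.

```python
# Hypothetical lookup table: each demonstrated task variation is stored as
# a sequence of action labels (labels invented for illustration).
table = [
    ["plate", "screw", "screwdriver"],
    ["plate", "screwdriver", "screw"],
    ["screw", "plate", "screwdriver"],
]

def predict_sequence(observed, table):
    """Return the stored sequence closest to the actions recognized so far;
    its remainder predicts the upcoming collaborative actions."""
    def dist(seq):
        # Placeholder distance: count mismatches against the stored prefix.
        return sum(a != b for a, b in zip(observed, seq))
    return min(table, key=dist)
```

For example, after recognizing `["plate", "screw"]` the nearest stored sequence tells the robot that the screwdriver handover is likely next, so it can prepare that action before being asked.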
