Imitation learning is useful for endowing robots with skills that are difficult, if not impossible, to program by hand: for example, a golf swing that exploits the redundancy of a 7-degree-of-freedom arm, or a collaborative skill that must be coordinated with the movements of a human partner.
Kinesthetic teaching and teleoperation are now widely accepted methods to provide demonstrations for imitation learning, mainly because they avoid the correspondence problem.
However, these methods are still far from ideal.
In human-robot collaboration, kinesthetic teaching is disruptive, so natural interactions cannot be demonstrated. More generally, the robot's physical embodiment prevents the human from giving truly optimal and natural demonstrations.
Ideally, robots should learn simply by observing the human.
Direct observation, however, raises the problem that a movement a human can demonstrate well may not be kinematically feasible for the robot to reproduce.
In this paper, we address this problem by using stochastic search to simultaneously find the appropriate location of the demonstration reference frame with respect to the robot and adapt the demonstrated trajectory.
This means that a human demonstrator can show the skill anywhere, without worrying whether the robot can kinematically reproduce it. Our optimizer searches for a feasible mapping such that the robot's movement resembles the original human demonstration.
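To make the idea concrete, here is a minimal toy sketch of the frame-placement half of the problem. All names and numbers are hypothetical: a 2-link planar arm stands in for the robot, the demonstrated trajectory is a small circle given in the human's frame, and a simple greedy random search stands in for the stochastic optimizer used in the paper (which additionally adapts the trajectory shape, not just the frame).

```python
import math
import random

L1, L2 = 0.8, 0.6            # toy 2-link planar arm (hypothetical link lengths)
R_MIN, R_MAX = abs(L1 - L2), L1 + L2   # reachable annulus of the arm

# Demonstrated trajectory in the human's frame: a small circle of radius 0.3.
demo = [(0.3 * math.cos(2 * math.pi * k / 20),
         0.3 * math.sin(2 * math.pi * k / 20)) for k in range(20)]

def infeasibility(tx, ty):
    """How far the translated demo points fall outside the reachable annulus."""
    total = 0.0
    for x, y in demo:
        r = math.hypot(x + tx, y + ty)
        total += max(0.0, r - R_MAX) + max(0.0, R_MIN - r)
    return total

def cost(tx, ty):
    """Kinematic feasibility plus a small pull toward the original placement."""
    return infeasibility(tx, ty) + 0.01 * math.hypot(tx - 2.5, ty)

def random_search(n_iter=2000, sigma=0.3, seed=0):
    """Greedy Gaussian perturbation search over the frame translation."""
    rng = random.Random(seed)
    best, best_c = (2.5, 0.0), cost(2.5, 0.0)   # initial frame: out of reach
    for _ in range(n_iter):
        cand = (best[0] + rng.gauss(0, sigma), best[1] + rng.gauss(0, sigma))
        c = cost(*cand)
        if c < best_c:
            best, best_c = cand, c
    return best
```

Starting from a frame placed out of the arm's reach, the search drags the demonstration frame until every trajectory point becomes reachable, while the small secondary cost keeps it close to where the human originally demonstrated.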
Later, we used this method to generate human-like movements that also account for the ergonomics of the human partner.
Maeda, G.; Ewerton, M.; Koert, D. & Peters, J. Acquiring and Generalizing the Embodiment Mapping From Human Observations to Robot Skills. IEEE Robotics and Automation Letters, 2016, 1, 784-791. pdf here.