Learning by imitation: a new paradigm


 If robots could imitate the behaviors humans demonstrate to them, obtaining novel robot skills would be trivial. However, developing a generic imitation system that can mimic visually presented behaviors is very hard. Current imitation systems have non-negligible limitations, either in the complexity of the target task or in the precision that can be achieved. Here we propose a novel imitation-learning paradigm for robot skill synthesis that avoids these limitations by exploiting the human capacity for motor learning. The idea is to treat the target robot platform as a tool that a human can control intuitively. Once the robot can be controlled effortlessly, the human can obtain the target behavior on the robot through practice. Successful execution of the task by the human via the robot means that the required control commands have been discovered, and these can be used to design controllers that operate autonomously.

 This paradigm places the initial burden of learning on the human instructor, but ultimately allows the robot to perform complex skills precisely without human guidance. Humans are adept at learning to use tools and control new devices. After a period of practice, people typically accommodate the dynamic properties of a new device without conscious effort; for example, they naturally assimilate the relationship between the movements of their limbs and the dynamics of the car they are driving. We highlighted this fact in previous work [1, 2], and it is well supported by neurophysiological experiments: primates have highly plastic limb representations, which expand immediately upon acquisition of tools [3, 4].

 In short, our framework aims to exploit this human capability in order to obtain robot controllers for complex motor tasks. The paradigm divides the construction of a motor controller into two phases: (i) a human operator performs the task, possibly after practice, via an intuitive interface; and subsequently (ii) the human-generated robot motions are used as data points to construct an independent controller.
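As a minimal sketch of phase (ii), the robot motions logged during tele-operation could be turned into an open-loop command sequence, for instance by averaging several successful trajectories frame by frame. The function name and data layout below are illustrative assumptions, not the implementation used in [1, 2]:

```python
def build_open_loop_controller(demos):
    """Phase (ii) sketch (hypothetical): combine several successful
    tele-operated trajectories into one open-loop command sequence.

    demos: list of trajectories, each a list of joint-angle vectors
           (all trajectories assumed time-aligned and equal length).
    Returns the frame-wise average trajectory, to be replayed
    autonomously on the robot.
    """
    n = len(demos)
    return [
        # Average the j-th joint angle over all demonstrations at this time step.
        [sum(frame[j] for frame in frames) / n for j in range(len(frames[0]))]
        for frames in zip(*demos)
    ]
```

In practice the demonstrations would first need to be time-aligned; here equal length is simply assumed for clarity.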

The Proposed Paradigm at Work: Ball Swapping


 The ball-swapping task is defined as manipulating two balls (Chinese healing/health balls) in one hand so that their locations are swapped, as illustrated in IMG.1 (left panel). Humans naive to this task can learn to perform the swap smoothly after a short practice. We reasoned that the 16-DOF Gifu Hand III (Dainichi Co. Ltd., Japan) should be capable of this, and decided to test the proposed paradigm on the Gifu Hand ball-swapping task.

 Real-time control of the robotic hand by the human operator was achieved using an active-marker motion-capture system. This system allowed very intuitive control of the robot fingers. The key to achieving an intuitive interface was the anthropomorphic nature of the robot hand, together with the anthropomorphic mapping between human and robot fingers. With this interface, subjects could control the robot hand as if it were their own: they could grasp or point effortlessly with their ‘robot hand’, suggesting that the robot had been subsumed into their body schema. After this stage, a human subject was asked to operate the robot hand in order to complete one cycle of the ball-swapping task.
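One simple way such an anthropomorphic finger mapping could be realized is a per-joint linear map from captured human joint angles to robot joint commands, clamped to the robot's joint limits. The function, gains, and offsets below are illustrative assumptions rather than the actual interface of [1]:

```python
def map_human_to_robot(human_angles, gains, offsets, limits):
    """Hypothetical anthropomorphic joint mapping: for each joint,
    scale and offset the human angle, then clamp to the robot's range.

    human_angles: joint angles estimated from motion-capture markers
    gains, offsets: per-joint linear calibration parameters
    limits: per-joint (low, high) bounds of the robot hand
    """
    commands = []
    for q, g, o, (lo, hi) in zip(human_angles, gains, offsets, limits):
        commands.append(min(hi, max(lo, g * q + o)))
    return commands
```

Running such a map at the motion-capture frame rate would give the operator the immediate, transparent control the paradigm relies on; the per-joint calibration would be tuned so the robot fingers track the operator's own.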


 The learning phase took one week of training (approximately 2 hours per day), after which the subject was able to swap the balls with the robot hand without dropping them. This performance was then used, either as-is or after off-line speed and smoothness improvements, to obtain an autonomous open-loop controller for ball swapping. One cycle of such autonomous performance is illustrated in IMG.2 (right panel). Further details of the implementation can be found in [1, 2].
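The off-line improvements mentioned above could, for example, take the form of a moving-average filter to smooth the recorded joint trajectory and a frame-subsampling step to speed up replay. The following is a sketch under those assumptions, not the specific post-processing used in [1, 2]:

```python
def smooth(traj, window=3):
    """Moving-average smoothing of a recorded joint trajectory.

    traj: list of joint-angle vectors, one per time step.
    Each output frame averages the frames in a window centered on it
    (truncated at the trajectory boundaries).
    """
    half = window // 2
    out = []
    for t in range(len(traj)):
        lo, hi = max(0, t - half), min(len(traj), t + half + 1)
        seg = traj[lo:hi]
        out.append([sum(q[j] for q in seg) / len(seg) for j in range(len(traj[0]))])
    return out


def speed_up(traj, factor=2):
    """Keep every `factor`-th frame so the open-loop replay runs faster
    at the same control rate."""
    return traj[::factor]
```

Smoothing before subsampling keeps the sped-up open-loop commands within what the hand can track; too aggressive a speed-up would, of course, break the dynamics that make the balls roll correctly.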


[1] Oztop, E., Lin, L.-H., Kawato, M., & Cheng, G. Dexterous skills transfer by extending human body schema to a robotic hand, In IEEE-RAS International Conference on Humanoid Robots, pp. 82-87, Genova, Italy (2006).

[2] Oztop, E., Lin, L.-H., Kawato, M., & Cheng, G. Extensive human training for robot skill synthesis: Validation on a robotic hand, In IEEE International Conference on Robotics and Automation, pp. 1788-1793, Rome, Italy (2007).

[3] Iriki, A., Tanaka, M., & Iwamura, Y. Coding of Modified Body Schema During Tool Use by Macaque Postcentral Neurones, In Neuroreport, vol. 7, pp. 2325-2330 (1996).

[4] Obayashi, S., Suhara, T., Kawabe, K., Okauchi, T., Maeda, J., Akine, Y., Onoe, H., & Iriki, A. Functional Brain Mapping of Monkey Tool Use, In Neuroimage, vol. 14, pp. 853-861 (2001).