The Gifu Hand III (Dainichi Co. Ltd., Japan) is a 20-joint dexterous robot hand with 16 degrees of freedom (DOF): four at the thumb and three on each finger. With the Gifu Hand, I study several topics including skill learning, imitation, and object affordance learning.
Human visuomotor learning for robot skill synthesis: Dexterous manipulation
This study explores how the human visuomotor learning ability can be utilized to obtain dexterous manipulation and movement capabilities on robots (see also item 4 below). For example, an effortless ball manipulation via real-time control of the Gifu Hand can be seen here (~10Mb QuickTime). A more challenging task is to rotate the so-called Chinese healing balls without dropping them. With training, the robot hand becomes integrated into the human 'body schema', allowing the subject to perform this task with the robot hand. Here is a movie (12Mb QuickTime) or (12Mb mpeg1) showing the skill obtained with this paradigm. This basic skill can then be tuned to improve performance (e.g. speed), as shown here (8Mb mpeg1).
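As a rough illustration (not the actual control software used in these demos), the sketch below shows the kind of mapping one might use to stream measured human finger joint angles to the 16 DOF of the Gifu Hand in a real-time loop; the joint ordering, gains, and limits are assumptions.

```python
import numpy as np

# Illustrative sketch only: the joint ordering, gains, offsets and limits
# below are assumptions, not the actual Gifu Hand interface.
N_DOF = 16                              # 4 thumb DOF + 3 DOF x 4 fingers
GAIN = np.ones(N_DOF)                   # per-joint scaling, human -> robot
OFFSET = np.zeros(N_DOF)                # per-joint offset (radians)
JOINT_MIN = np.full(N_DOF, -0.3)        # assumed lower joint limits
JOINT_MAX = np.full(N_DOF, 1.6)         # assumed upper joint limits

def human_to_robot(glove_angles):
    """Map measured human joint angles to robot joint targets,
    clipped to the robot's joint limits."""
    cmd = GAIN * np.asarray(glove_angles, dtype=float) + OFFSET
    return np.clip(cmd, JOINT_MIN, JOINT_MAX)

# One step of a simulated real-time loop (a random vector stands in for a
# dataglove reading; a real loop would send the command to the hand).
glove_sample = np.random.uniform(0.0, 1.2, N_DOF)
print(human_to_robot(glove_sample))
```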
Self-observation and auto-association as a route to simple imitation
In previous years, we have explored the associative memory hypothesis of imitation bootstrapping with the Gifu Hand. Click for a demo movie (17Mb mpeg1).
Application to Brain-Machine Interface
Collaborating with Honda and neuroscientists at ATR/CNS, we employed the Gifu Hand in a brain-machine interface (BMI) project. Using fMRI, human subjects' brain activity is mapped to one of the rock/scissors/paper hand postures, which are replicated on the Gifu Hand in near real-time. Do a Google search on the project.
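For illustration only: the snippet below sketches a minimal decoder that maps an fMRI activity pattern to one of the three postures using a nearest-centroid rule; the data, features, and classifier are assumptions, not the decoding pipeline used in the actual project.

```python
import numpy as np

# Illustrative sketch only: a nearest-centroid decoder that maps an fMRI
# activity pattern (one feature vector per trial) to one of the three hand
# postures. The data, features and classifier here are assumptions.
POSTURES = ["rock", "scissors", "paper"]

def fit_centroids(X, y):
    """X: (trials, voxels) activity patterns; y: posture index per trial."""
    return np.stack([X[y == k].mean(axis=0) for k in range(len(POSTURES))])

def decode(pattern, centroids):
    """Return the posture whose centroid is closest to the given pattern."""
    distances = np.linalg.norm(centroids - pattern, axis=1)
    return POSTURES[int(np.argmin(distances))]

# Toy data: 30 training 'trials' with 50 'voxels' each
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))
y = rng.integers(0, 3, size=30)
centroids = fit_centroids(X, y)
print(decode(rng.normal(size=50), centroids))   # posture to send to the hand
```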
Real-time full-body control of HOAP-II, a small humanoid robot (Fujitsu, Japan)
Human visuomotor learning for robot skill synthesis: Reaching while keeping static balance
This is the extension of the 'human visuomotor learning for robot skill synthesis' paradigm to full-body humanoid robots. This is collaborative work with Jan Babic at Jozef Stefan Institute, Slovenia, and Joshua Hale at ATR, Japan. Here (~15Mb QuickTime) is the human control of the robot, where the subject was asked to keep the robot balanced while tracing a trajectory with his finger. The collected data are used to derive a balanced reaching skill. Here (~11Mb QuickTime) this skill is used to have the robot trace an elliptical trajectory.
Actuated 3-DOF platform built by Jan Babic at Jozef Stefan Institute, Slovenia
Improving the human visuomotor learning for robot skill synthesis paradigm
This platform can carry a human. The idea is this: the subject controlling a humanoid robot will 'ride' the platform and 'feel' how the robot feels in terms of the dynamics of its center of mass. Here (~12Mb mpeg1) the force control of the platform can be seen.
The separation induced by a higher-order neuron (a polynomial) for a dichotomy of the corners of the 3-dimensional cube.
Representation of Boolean functions (dichotomies over the n-cube) using polynomials (higher-order neurons) with a small number of monomials (fan-in)
Higher-order neurons, or sigma-pi units, are extensions of linear neuron models that capture the nonlinearity in the input-output relation of a mapping using products of input variables, called monomials. The net input to a higher-order unit is the sum of the monomials weighted by adjustable parameters. The output is obtained by applying a predefined activation function, usually a sigmoidal or threshold function, to the net input. There are many aspects of this powerful model that deserve attention. My main interest is in the number of monomials that a higher-order neuron requires to solve a given classification. More generally, given a set of classification problems, what is the minimum number of monomials that can solve the given problem set? Recently, I have shown that any dichotomy of the n-cube can be realized with 0.75×2^n or fewer monomials. This is the best bound known so far. Here is the reprint that has the proof of this claim.
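As a concrete example of the model, the following sketch implements a sigma-pi unit over ±1-coded inputs and shows that the parity dichotomy of the corners of the 3-cube is realized by a single monomial; the coding and threshold activation are the standard choices described above.

```python
import itertools
import numpy as np

# A sigma-pi (higher-order) unit: the net input is a weighted sum of
# monomials (products of selected input variables), passed through a
# threshold activation.
def sigma_pi_output(x, monomials, weights, threshold=0.0):
    net = sum(w * np.prod([x[i] for i in mono])
              for w, mono in zip(weights, monomials))
    return 1 if net > threshold else -1

# Example: with inputs coded as -1/+1, the parity dichotomy of the corners
# of the 3-cube is realized by a single monomial, x0*x1*x2.
monomials = [(0, 1, 2)]
weights = [1.0]

for corner in itertools.product([-1, 1], repeat=3):
    print(corner, "->", sigma_pi_output(corner, monomials, weights))
```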
DB, the robot used in human-robot interaction experiments
Motor interference: an objective tool to test the extent to which a robot is perceived as human-like
It is generally accepted that (humanoid) robots will become part of our daily lives, so it is important to understand how well they will be accepted as social partners. In this direction, we have adopted the motor interference effect observed in human-human interactions to study the human perception of robots as social partners. Motor interference refers to the differential effect of observing an action while performing a compatible or an incompatible action. An example of a compatible and incompatible movement pair is vertical and lateral hand movements. We have recently shown that a humanoid robot (DB) moving in a human-like manner elicits motor interference. We are now conducting experiments to tease apart the contributions of motion and form to this reaction. To get an idea of the experimental setup, click here (4Mb mpeg1).
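One common way to quantify motor interference (given here only as an illustrative assumption, not necessarily the measure used in our experiments) is the variance of the hand trajectory in the directions orthogonal to the instructed movement axis:

```python
import numpy as np

# Illustrative assumption: quantify interference as the variance of the hand
# trajectory orthogonal to the instructed movement axis. Larger orthogonal
# variance while observing an incompatible movement would indicate stronger
# interference.
def orthogonal_variance(trajectory, movement_axis):
    """trajectory: (samples, 3) hand positions; movement_axis: direction of
    the instructed movement (need not be normalized)."""
    traj = np.asarray(trajectory, dtype=float)
    axis = np.asarray(movement_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    along = traj @ axis                        # component along the movement
    orthogonal = traj - np.outer(along, axis)  # residual, off-axis motion
    return orthogonal.var(axis=0).sum()

# Toy example: a vertical (z-axis) movement with small lateral wobble
t = np.linspace(0.0, 1.0, 200)
traj = np.column_stack([0.01 * np.random.randn(200),
                        0.01 * np.random.randn(200),
                        0.3 * np.sin(2 * np.pi * t)])
print(orthogonal_variance(traj, [0.0, 0.0, 1.0]))
```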
Activity maps (each of the small thumbnail images) of the units that model the AIP neurons. Each map is constructed by gradually changing the affordances of the presented object.
Grasp Affordance Learning
Grasp affordance refers to the intrinsic features of an object that are relevant for grasping. For example, the color of a pen is, in general, not part of its grasp affordance because it does not guide the grasping behavior. In macaque monkeys, the parietal area AIP appears to be involved in affordance extraction. AIP, together with the ventral premotor cortex (F5), forms the core of the monkey grasping circuit. Recently I developed a model of AIP neurons based on the hypothesis that the early grasping of infants (being mediated by other mechanisms) provides the learning data points for the F5-AIP complex to learn a mapping from visual to motor representations. The critical test is then to see whether this visuomotor learning leads to the emergence of unit responses that are comparable to actual AIP neurons. The simulation results show that this is the case. The future research plan is to compare the modeled AIP unit activities with AIP neuron discharge profiles in a quantitative way.
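A minimal sketch of the learning idea, not the actual model: grasps produced by an earlier mechanism supply (visual affordance, motor parameter) training pairs, and a simple least-squares fit stands in for the F5-AIP learning; the specific features below are assumptions.

```python
import numpy as np

# Illustrative sketch only: early grasps (generated by another mechanism)
# provide (visual affordance, motor parameter) training pairs. The features
# (object width/orientation -> grip aperture/wrist angle) are assumptions.
rng = np.random.default_rng(1)

# 'Early grasping' data: affordance features and the motor parameters
# that happened to work for them.
visual = rng.uniform(0.0, 1.0, size=(200, 2))                 # width, orientation
true_map = np.array([[1.2, 0.1],
                     [0.0, 0.9]])
motor = visual @ true_map + 0.05 * rng.normal(size=(200, 2))  # aperture, wrist angle

# Learn the visual -> motor mapping from the self-generated data
W, *_ = np.linalg.lstsq(visual, motor, rcond=None)

new_object = np.array([0.6, 0.3])     # affordance features of a novel object
print(new_object @ W)                 # predicted grasp parameters
```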
The cortical grasp planning and execution circuit of macaque monkeys.
Mirror Neurons and Imitation
According to general opinion, high-level functions such as imitation, action understanding, and (precursors of) language are attributed to mirror neurons. However, it is not clear how much the human mirror system has evolved to support imitation and language, if indeed there is a connection between these skills and mirror neurons. Furthermore, the number of studies that take a computational viewpoint on these hypotheses is limited. Recently, guided by my earlier modeling of mirror neurons and mental state inference mechanisms, I have made a meta-analysis of the computational models (that can be seen as models of mirror neurons) and current opinions about mirror neuron function. Here is the reprint.
Older Projects (applets)
Ph.D. and related links