- Abstract
The purpose of this research is to enable a robot to assist a human and
perform a task with him or her cooperatively.
To assist the human, the robot must autonomously recognize the human's
motions in real time.
Vision is the most useful sensor for this purpose. The robot
recognizes the current target objects and the human's grasp
configuration by vision, and must plan and execute the required
assisting motion based on the task purpose and the context.
In this research, we addressed these problems.
We defined an abstract Task-Model, analyzed the human demonstration
using events and an event stack, and automatically generated the
Task-Models needed for the robot's assistance.
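The abstract only names the event-stack analysis without implementation detail. The following Python sketch illustrates one possible reading of it, under stated assumptions: the `Event` type, the event kinds ("grasp", "attach", "release"), and the function `segment_demonstration` are hypothetical names introduced here, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A single observed event in the human demonstration (illustrative)."""
    kind: str      # e.g. "grasp", "attach", "release"
    obj: str       # object the event refers to
    time: float    # timestamp in seconds

def segment_demonstration(events):
    """Push events onto a stack; when a 'release' closes a 'grasp' on the
    same object, pop the enclosed events as one candidate Task-Model."""
    stack, task_models = [], []
    for ev in events:
        if ev.kind == "release":
            segment = []
            # pop back to the matching grasp of the same object
            while stack and not (stack[-1].kind == "grasp" and stack[-1].obj == ev.obj):
                segment.append(stack.pop())
            if stack:
                segment.append(stack.pop())   # the matching grasp
            segment.reverse()
            segment.append(ev)
            task_models.append(segment)
        else:
            stack.append(ev)
    return task_models

# Example: grasp part A, attach it, release -> one generated segment
demo = [Event("grasp", "A", 0.0), Event("attach", "A", 1.2), Event("release", "A", 2.0)]
print(segment_demonstration(demo))
```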
By analyzing the human's motions, the robot planned and executed the
assisting motions based on the Task-Knowledge while cooperating with
the human.
We implemented a 3D object recognition system using appearance-based
matching with minute templates and sub-pixel stereo vision.
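The abstract does not describe how the matching and sub-pixel stereo stages are combined; the sketch below only shows one way such a pipeline could be composed, using OpenCV's template matching as a stand-in. The function names (`subpixel_peak`, `locate_in_stereo_pair`), the parabolic sub-pixel fit, and the parameters `focal_px` and `baseline_m` are assumptions for illustration, not the authors' system.

```python
import cv2

def subpixel_peak(corr, x, y):
    """Refine an integer correlation peak (x, y) to sub-pixel accuracy by
    fitting a parabola through neighbouring correlation values.
    Assumes the peak is not on the border of the correlation map."""
    def refine(c_m, c_0, c_p):
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom
    dx = refine(corr[y, x - 1], corr[y, x], corr[y, x + 1])
    dy = refine(corr[y - 1, x], corr[y, x], corr[y + 1, x])
    return x + dx, y + dy

def locate_in_stereo_pair(left, right, template, focal_px, baseline_m):
    """Match a small appearance template in both images, refine the peaks
    to sub-pixel positions, and triangulate depth from the disparity."""
    pts = []
    for img in (left, right):
        corr = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(corr)   # integer peak location
        pts.append(subpixel_peak(corr, x, y))
    disparity = pts[0][0] - pts[1][0]           # horizontal shift in pixels
    depth = focal_px * baseline_m / disparity if disparity > 0 else float("inf")
    return pts, depth
```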
The effectiveness of these methods was verified through an experiment
in which a human and a robotic hand cooperatively assembled toy parts.