Motivation



To participate in embodied interaction with humans, social robots must be able to recognize relevant social patterns, including interaction rhythms, imitation, and particular sequences of behaviors, and to relate them to socially meaningful interaction schemas. In this project we measure and quantify these patterns by observing and recording interactions between humans doing shadow puppetry. Shadow puppetry provides a medium of interaction that is expressive enough to exhibit the phenomena we are interested in, yet constrained enough that capturing and modeling the behavior of the participants remains tractable.

Recording embodied interaction between humans.

Methods



We use the recorded data to build false interaction sequences by randomly stitching the left side of one real sequence to the right side of another. In the tradition of the Turing test, we quantify socially intelligent behavior using human evaluation: we ask people to watch both real and false video sequences and judge whether each clip shows a real or a false interaction. The frequency with which a video is rated as a real interaction provides our measure of interactivity.
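A minimal sketch of this construction, assuming each recording has been split into the two participants' sides of the frame; the data layout and function names here are our own illustration, not the project code:

```python
import random

def make_false_sequences(clips, n_false, rng=random.Random(0)):
    """Stitch the left half of one real clip to the right half of another.

    `clips` is a list of (left_frames, right_frames) pairs, one per
    recorded interaction, where each element is that participant's side
    of the frame sequence. (Hypothetical layout for illustration.)
    """
    false_clips = []
    for _ in range(n_false):
        clip_a, clip_b = rng.sample(clips, 2)  # two distinct recordings
        left_frames, _ = clip_a
        _, right_frames = clip_b
        # Truncate to the shorter side so the composite stays frame-aligned.
        n = min(len(left_frames), len(right_frames))
        false_clips.append((left_frames[:n], right_frames[:n]))
    return false_clips

def interactivity_score(judgments):
    """Fraction of viewers who rated the clip as a real interaction."""
    return sum(judgments) / len(judgments)  # judgments are 0/1 votes
```

Because the two halves of a false clip come from different recordings, any apparent coordination between them is coincidental, which is what the human raters are implicitly testing for.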



Next we extract from each video sequence a stream of behavior primitives that represent the basic tokens of shadow puppetry. We treat the behavior of each participant at any instant as a random variable and build distributions that model the generative process of their interaction. We have also built a behavior recognition system that automatically converts the video stream into gestural tokens in real time.


[Video] The perception system of our robot codes the gestural tokens of a human in real time.
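One simple form such a model could take is a smoothed joint distribution over simultaneous token pairs, estimated by counting co-occurrences in the time-aligned token streams. This is a sketch under that assumption; the token vocabulary below is hypothetical:

```python
from collections import Counter
from itertools import product

TOKENS = ["bird", "dog", "snake", "rest"]  # hypothetical gestural vocabulary

def estimate_joint(token_pairs, alpha=1.0):
    """Estimate P(token_a, token_b) for simultaneous behavior tokens.

    `token_pairs` is a list of (token_a, token_b) observations, one per
    time step; additive smoothing `alpha` keeps unseen pairs at nonzero
    probability.
    """
    counts = Counter(token_pairs)
    total = len(token_pairs) + alpha * len(TOKENS) ** 2
    return {
        (a, b): (counts[(a, b)] + alpha) / total
        for a, b in product(TOKENS, repeat=2)
    }
```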

Experimental Evaluation



We evaluate our models by using them as controllers in an embodied human-robot interaction experiment, sampling robot behaviors from the learned distributions conditioned on the behavior of the human. Our platform is a 4-DOF Barrett Whole Arm Manipulator (WAM). The WAM can execute the same gestural tokens as the human: each behavior is performed by following a predefined trajectory that connects points in the WAM's joint space. We also provide the WAM with a perception system trained to recognize the tokens of our gestural language. Subjects are asked to interact with the robot under different controllers and to evaluate each interaction by answering a short set of survey questions.

[Video] The robot observes the behavior of the human and generates interactive behavior by sampling from the learned joint distribution.
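A minimal sketch of such a sampling controller, reusing the hypothetical joint distribution above; the `perceive` and `execute` callbacks stand in for the real perception system and the WAM trajectory player:

```python
import random

def sample_response(joint, human_token, rng=random.Random()):
    """Draw a robot token from P(robot_token | human_token).

    `joint` maps (human_token, robot_token) pairs to probabilities;
    conditioning restricts to the row matching the observed human token
    (`random.choices` renormalizes the weights itself).
    """
    row = {r: p for (h, r), p in joint.items() if h == human_token}
    tokens, weights = zip(*row.items())
    return rng.choices(tokens, weights=weights)[0]

def control_step(perceive, execute, joint):
    """One perception-action cycle of the interaction controller."""
    human_token = perceive()              # current gestural token of the human
    robot_token = sample_response(joint, human_token)
    execute(robot_token)                  # play the predefined WAM trajectory
```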