Research in motion

- March 17, 2009

Prof. Aaron Newman and research assistant Hazlin Zaini go through the motions of sign language in front of a green screen. (Nick Pearce Photo)

Gollum. King Kong. Jar Jar Binks. And nearly every character in modern video games.

Motion capture technology has advanced dramatically in the past decade, to the point where digital characters in film and gaming are approaching photo-realism. But Aaron Newman sees the technology's potential for more than just entertainment. The Dalhousie psychologist and Canada Research Chair in Cognitive Neuroscience is using motion capture to help better understand sign language and other forms of gesture-based communication.

"Sign languages use the same abstract rules and patterns as spoken languages, but they're coming in through an entirely different channel: sight as opposed to sound," he says. "How does that shape your brain and its capacity for human language?"

One of the challenges in studying the effects of sign language on the brain is that human gestures are often aided or affected by other stimuli like facial expressions. To overcome this, Dr. Newman decided he needed to prepare his own short videos that strip away everything but the most basic movement. These videos would then be shown to study participants hooked up to an EEG system that monitors brain activity.

"We can see instantly how people react, within milliseconds," he says. "We see the blips of activity coming from different areas of the brain, and that helps us better understand the processes by which people understand these gestures. We want to know where and when symbolic communication crosses the threshold into full-blown language in the brain."
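The article doesn't describe the lab's analysis pipeline, but the millisecond-scale "blips" Dr. Newman refers to are event-related potentials: short EEG segments time-locked to the onset of each video and averaged across trials. The sketch below shows that basic epoch-and-average step using the open-source MNE-Python toolkit; the recording file name and event codes are hypothetical placeholders, not details from the study.

    import mne

    # Load a continuous EEG recording and the event markers that flag the
    # onset of each gesture video (file name and marker channel are
    # hypothetical; any FIF/EDF/BrainVision recording would work).
    raw = mne.io.read_raw_fif("gesture_session_raw.fif", preload=True)
    events = mne.find_events(raw, stim_channel="STI 014")

    # Cut the recording into epochs time-locked to video onset:
    # 200 ms of baseline before the stimulus, 800 ms after.
    epochs = mne.Epochs(
        raw,
        events,
        event_id={"sign": 1, "non_sign_gesture": 2},  # hypothetical codes
        tmin=-0.2,
        tmax=0.8,
        baseline=(None, 0),
        preload=True,
    )

    # Averaging the epochs within each condition gives the event-related
    # potentials -- the millisecond-scale responses described above.
    evoked_sign = epochs["sign"].average()
    evoked_gesture = epochs["non_sign_gesture"].average()

    # Comparing the two waveforms in a language-sensitive time window
    # (roughly 300-500 ms) is one common way to ask whether a gesture
    # is being processed like language.
    print(evoked_sign, evoked_gesture)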

To get the clarity he required in these videos, Dr. Newman used funding from NSERC to purchase a professional motion capture system from a Fredericton company called Measurand, whose major clients are in the film and gaming industries. (Their equipment has even been used by NASA.) Just like you'd see on a behind-the-scenes DVD feature, Dr. Newman and his team hooked their assistants up with dozens of fibre-optic sensors and tracked their movement down to the pinky finger.

But Dr. Newman's lab is full of psychology students, not the programming experts needed to turn all this data into usable video. Coincidentally, Measurand's animation director Carl Callewart was hosting a training seminar on motion capture animation for students at the Centre for the Arts and Technology in Halifax. The two decided to partner on the project. The seminar took place over three days, during which the 20 students prepared over 80 short video clips from Dr. Newman's motion capture data.

Over the next several months, Dr. Newman plans to test the videos on both sign language users and people who don't know sign language at all, and he's already put together a second set of motion capture data that he hopes to turn into another set of animations. He also plans to use the motion capture data in collaboration with researchers at the National Institute on Deafness and Other Communication Disorders in the U.S.

