In a small, sparse makeshift lab, Melissa Malzkuhn practices her range of motion in a black, full-body unitard dotted with light-reflecting nodes. She has strapped on a motion capture, or mocap, suit. Infrared cameras lining the room will capture her movements and translate them into a 3-D character, or avatar, on a computer.

But she's not making a Disney animated film.

Three-dimensional motion capture has developed quickly in the last few years, most notably as a Hollywood production tool for computer animation in films like Planet of the Apes and Avatar.

Behind the scenes, though, leaders in the deaf community are adopting the technology to create and improve bilingual learning tools in American Sign Language. Malzkuhn has suited up to record a simple nursery rhyme. She is deaf, and she spoke with NPR through an interpreter.

"I know in English there's just a wealth of nursery rhymes available, but we really don't see as much in ASL," she says. "So we're gonna be doing some original work here in developing nursery rhymes."

That's because sound-based rhymes don't cross over well into the visual language of ASL.

Malzkuhn heads the Motion Light Lab, or ML2, the newest hub of Visual Language and Visual Learning (VL2), a National Science Foundation Science of Learning Center at Gallaudet University, the premier school for deaf and hard of hearing students.

Using high tech to translate research into tangible educational products has been the goal of ML2's all-deaf team since it launched in 2009.

In its latest venture, the team wants to pair original ASL rhymes with a 3-D signing avatar, a concept that could eventually be rendered as, for example, a signing cartoon animal on a kids' show.

Malzkuhn's movements are mapped onto an avatar. (Emma Bowman/NPR)

The lab's new equipment was funded by a grant VL2 recently received from the Keck Foundation. Part of the lab's role is to design visual stimuli for 6- to 10-month-olds in order to study how and when deaf infants are ready to learn.

Though hearing people take in language through sound and deaf people through sight, the processing happens in the same language centers of the brain, Malzkuhn says.

"I think often, people don't realize that, which leads to the assumption that deaf people are lacking something," she says. "People think, 'How can you learn a language if you don't hear it?' "

VL2 research has shown that language delay can have negative outcomes later in life. Early exposure to bilingualism and learning through narrative structure are critical to cognitive development for both deaf and hearing children. A recent study, which measured the effects of watching Sesame Street, supports the idea that narratives delivered at an early age can have a positive, lasting impact on kids' academic performance. Nursery rhymes in particular can aid children's memory and prediction skills.

ML2's largest strength is its storytelling, Malzkuhn says. And for the hard of hearing, that means storytelling must be visual and integrated early on.

Most kids learn their ABCs through the classic song, but English nursery rhymes don't translate well to ASL: the grammatical structure of sign language is very different from that of English.

"Think about it," Malzkuhn says. "English has a very sound-based way of rhyming [...] and it helps them to be able to think and memorize things and then express language because they're used to hearing those patterns and rhythms of the language."

To translate rhymes into a language not based on sound, she adds, the team builds in repetitive rhythms through the use of common handshapes.

Motion capture tracks these "temporal rhythms" of hand gestures and charts the data on a dual monitor, like a polygraph, which acts as a blueprint for the 3-D signing avatar. (In ASL, signs and facial expressions do the work of what might be compared to vocal intonation in English.)
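
To make the idea concrete, here is a minimal sketch of how such a rhythm trace could be computed from mocap data. It is an illustration, not ML2's actual pipeline: the file name, the 120 frames-per-second rate and the choice of a single wrist marker are all assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical mocap export: per-frame 3-D positions of one wrist marker.
# "wrist_marker.npy" and the 120 fps sample rate are assumptions for
# illustration; real systems export richer marker sets and formats.
FPS = 120
positions = np.load("wrist_marker.npy")   # shape: (n_frames, 3)

# Frame-to-frame speed approximates the signing rhythm: peaks mark the
# strokes of each repeated handshape, valleys mark holds between signs.
speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * FPS
time = np.arange(len(speed)) / FPS

plt.plot(time, speed)
plt.xlabel("time (s)")
plt.ylabel("wrist speed (units/s)")
plt.title("Temporal rhythm of a signed nursery rhyme")
plt.show()
```

Plotted over time, the wrist's speed rises and falls with each repeated handshape, producing the polygraph-like trace described above.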

Jason Lamberton, ML2's tech consultant, points to the motion capture marker's position on screen. The left screen displays a graph featuring the temporal rhythms of a simple nursery rhyme suited for a 6- to 18-month-old: a boat on a wave. (Emma Bowman/NPR)

The team has also experimented with devices like the Oculus Rift virtual reality headset and the Kinect motion sensor to take advantage of the 3-D landscape.

"Sign language is a 3-D language," Malzkuhn says. "We use the space in front of us, the space around us."

Stacy Abrams, who coordinates a deaf mentorship program in Arizona, thinks capturing the 3-D data in its "truer form" would improve a child's learning.

"You can see the motion better. You can see the rhythm," she says. "Some parents struggle with handshapes or fingerspelling."

Take the tricky example of the signs for "chair" and "sit." Both use the same handshape in ASL, but the sign for "chair" has a smaller, doubled movement, while "sit" has a larger, single movement.

Each member of ML2's deaf team brings a specific area of expertise. Jason Lamberton (from left) is the tech consultant, Melissa Malzkuhn is the director and Yiqiao Wang, the lab's artist in residence, designs the art for all the storybook apps. (Emma Bowman/NPR)

"3-D would have to clarify that," Abrams adds. "3-D will help understanding structure and grammar within sign language. It's closer to a real live person."

ML2 has also been developing bilingual storybook apps for the iPad. Signers can analyze facial expressions, expand vocabulary or mimic the signs within the tablet app.

Even if the parents are new to ASL, they can access the story through the English component. More than 90 percent of deaf children are born to hearing families.

"It's a huge process for a hearing person to first learn that they have a visually oriented child, and how do they best provide the access to the child," Malzkuhn says. "I think that's where we as a research hub come in."

Gallaudet has a rich community and a history of generations of people who are deaf. But some deaf people don't sign, which can limit their ability to be part of the culture.

Deaf mentor Stacy Abrams has seen what kind of impact limited accessibility can have on deaf kids. The families she works with often have no access to ASL or deaf mentors, and some of the children have never seen other deaf children.

But when they are able to borrow or check out an iPad, apps like ML2's are a gateway to language accessibility.

The Baobab is one of ML2's original bilingual storybook apps. (Courtesy of VL2)

Arthur, a 3-year-old boy with hearing parents, was one of the children Abrams mentored. He came from a rural area of New Mexico and refused to look her in the eye.

"He was very shy, not understanding how to use eye contact to communicate" since he had been refusing to use sign language, she says.

When she showed him the storybook apps, he was hooked. So she added the iPad as a tool to get his attention. Before she would hand over the tablet, she had him copy her signs and look her in the eyes.

"This engaged him in learning, but it's not enough to leave him alone without community," Abrams says. "He moved to the deaf school and is making progress."

Copyright 2015 NPR. To see more, visit http://www.npr.org/.
