Updates and Conclusions

Ultimately, we needed to change direction from our previous research. For the past four months, I have been working with Deannia Lucas and, more recently, Meaghan Lidd on analyzing the transition time between signs in American Sign Language.


Obtaining New Data

Due to scheduling conflicts and a lack of data, we have been unable to make significant progress on our mouth movement research for the sign language avatar. Instead, we have shifted focus to data that is more readily available. The same annotated database we were using for mouth movements has several other tiers of annotations, and we are now focusing on the amount of time between main glosses. At first, I thought we could isolate the spaces by subtracting the main gloss tier from the English translation tier. Through ELAN, we can process subtractions on multiple files at once, so it seemed like the perfect route. It initially appeared that the English translations spanned the entire length of the videos; however, upon further inspection, I found gaps between the annotations. Thus, I needed a new method to retrieve the gaps between the main glosses.
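Since the tier subtraction didn't pan out, a more direct route is to read each .eaf file and compute the gaps between consecutive annotations on the main gloss tier itself. Below is a minimal sketch of that idea using the pympi-ling library; the tier name "main gloss" and the file name are placeholders, not the corpus's actual identifiers.

```python
# A minimal sketch: extract the gaps between main-gloss annotations
# directly from an ELAN (.eaf) file using pympi-ling.
import pympi

def gloss_gaps(eaf_path, tier="main gloss"):
    """Return (start_ms, end_ms) pairs for each gap between consecutive glosses."""
    eaf = pympi.Elan.Eaf(eaf_path)
    # Each annotation on an alignable tier is a (begin_ms, end_ms, value)
    # tuple; sorting orders them by start time.
    spans = sorted(eaf.get_annotation_data_for_tier(tier))
    gaps = []
    for (_, prev_end, _), (next_start, _, _) in zip(spans, spans[1:]):
        if next_start > prev_end:  # keep true gaps, skip overlapping annotations
            gaps.append((prev_end, next_start))
    return gaps

if __name__ == "__main__":
    for start, end in gloss_gaps("story01.eaf"):  # hypothetical file name
        print(f"gap: {start}-{end} ms ({end - start} ms)")
```

Running this over every file in the corpus would give the same transition-time data the ELAN subtraction was meant to produce, without depending on the English translation tier being gap-free.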


The Original Goal

The goal of the DREAM project is to analyze video data of stories signed in American Sign Language, focusing specifically on the movement of the mouth, to discover any characteristics governing that movement. Upon analysis, we hope to identify defining characteristics of mouthing and mouth gestures in ASL. The movement of the mouth can be very controversial in Deaf communities. However, researchers have generally classified it into two categories: mouthing and mouth gestures. Mouthing refers to the articulation of visual syllables on the mouth that resemble a spoken word. Mouth gestures refer to mouth movements that have nothing to do with the spoken language. If we are able to find specific rules governing the mouth, we can program those rules into our avatar to automate the appropriate facial grammar in our translations. Though we have a lot of annotation data for the face and body, there is little to no annotation of mouth movement. With fellow researchers Deannia Lucas and Maria Saenz, I plan to use OpenPose to see if we can extract more data.
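As a rough illustration of what that extraction could look like: when OpenPose is run with face estimation enabled, it writes one JSON file per frame containing a 70-point face_keypoints_2d array laid out as [x0, y0, c0, x1, y1, c1, ...]. The sketch below pulls a simple mouth-openness measure from such a file; the keypoint indices follow the standard 70-point face layout (48-67 are the lips), and the file name is only an example.

```python
# A sketch of reading mouth keypoints from OpenPose's per-frame JSON output.
# In the 70-point face layout, index 62 is the inner upper lip center and
# index 66 is the inner lower lip center.
import json

def mouth_openness(frame_json_path):
    """Return the vertical lip separation (in pixels) for the first detected person."""
    with open(frame_json_path) as f:
        data = json.load(f)
    if not data["people"]:
        return None  # no person detected in this frame
    face = data["people"][0]["face_keypoints_2d"]
    upper_y = face[62 * 3 + 1]  # y-coordinate of inner upper lip center
    lower_y = face[66 * 3 + 1]  # y-coordinate of inner lower lip center
    return lower_y - upper_y

# Hypothetical frame file produced by OpenPose's --write_json option.
print(mouth_openness("story01_000000000042_keypoints.json"))
```

Tracking a measure like this across frames would give us per-frame mouth data even where the corpus has no mouth annotations.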


Expressions

A common critique of sign language avatars is that they don’t have enough movement on the face. It is preferred that the avatars be as natural as possible to increase legibility and comfort for users. In order to decrease the woodenness of the avatar, it is essential to add facial expressions. This will inject life and meaning into our translations. Thus, I was tasked with creating facial expressions for our avatar.
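For context, avatar facial expressions are commonly represented as sets of morph-target (blend-shape) weights that are mixed and eased over time. The sketch below is purely illustrative; the shape names and the blending scheme are assumptions, not our avatar's actual rig.

```python
# A hypothetical sketch of layering facial expressions onto an avatar:
# each expression is a set of blend-shape weights, and transitions are
# produced by interpolating between poses.
NEUTRAL = {"browRaise": 0.0, "mouthSmile": 0.0, "eyeSquint": 0.0}
QUESTION = {"browRaise": 0.9, "mouthSmile": 0.0, "eyeSquint": 0.2}

def blend(a, b, t):
    """Linearly interpolate two expression poses; t=0 gives a, t=1 gives b."""
    return {k: (1 - t) * a[k] + t * b[k] for k in a}

# Ease into the questioning expression over ten animation steps.
for step in range(11):
    pose = blend(NEUTRAL, QUESTION, step / 10)
    print(pose)
```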


Expression Builder Tutorial

Within the American Sign Language Project at DePaul, we are lucky enough to work with custom software developed by our own Dr. Rosalee Wolfe and Dr. John McDonald. One such program is the Expression Builder, where we create any expressions or mouthings for our avatar. Having spent time creating expressions, I was tasked with creating a tutorial presentation for the program.
