Hello! I am a Master’s student at DePaul University, researching with the American Sign Language Project. My collegiate career began at the University of California, Santa Barbara, where I earned my Bachelor’s in Theater with summa cum laude distinction. In my junior year, my studies abroad brought me around the world on Semester at Sea. It was my first experience traveling outside the US, and I was privileged to visit 12 countries: Mexico, Japan, China, Vietnam, Myanmar, Singapore, India, Mauritius, South Africa, Ghana, Morocco, and England. It was a humbling and eye-opening experience.
Upon graduation from UCSB, I moved across the country to Chicago, where I began working as a flight attendant. As with my study abroad experience, I was able to travel and meet many different people and cultures. After I was furloughed due to COVID, I turned my attention to volunteering, working with a local animal shelter as a foster coordinator. I have always felt drawn to helping and caring for others, and to trying to effect change. I decided I could have more of an impact if I broadened my skillset, and so I began pursuing my Master’s in Computer Science.
Not long after starting the program, I found the American Sign Language Project. I reached out to Dr. Wolfe to ask if I could help on the project. I had taken three years of sign language in high school, and I thought my Theater degree would be of some use in helping make the avatar more realistic. Fortunately, Rosalee agreed, and I was put to work. A few months later, I applied for and was awarded this DREAM scholarship. Rosalee graciously offered to be my research advisor and is now advising this project.
For the future, I have dreams of working on a humanitarian project, opening a female led video game studio, or continuing down the research path and earning my Doctorate.
About My Advisor
Dr. Rosalee Wolfe
As the Division Director of Human-Computer Interaction and Computer Graphics, Dr. Wolfe was instrumental in establishing DePaul’s degree programs in Human-Computer Interaction, and in Computer Graphics and Animation. After earning a Ph.D. in Computer Science from Indiana University, she was a NASA fellow at the Johnson Space Center, served on various committees of ACM SIGCSE and SIGGRAPH, and held fellowships at Sony Imageworks and the University of Hamburg. She is a Fulbright Scholar and the team lead for the American Sign Language Project.
Research Area: Graphics / Animation, Human Computer Interaction
Specific Research Area: Character animation to support communication between Deaf and hearing communities; accessibility; graphics pedagogy
A special thank you to Dr. McDonald, who was an advisor in all but official title. Thank you as well to Jacob Furth, who mentored me during my original 10 weeks.
About My Project
The overall goal of the American Sign Language Project is to create an automatic sign language interpreting avatar. Deaf communities have been excluded from technological advances in communication, and there is a distinct lack of access to information in their own language. It is important to bridge this gap and increase accessibility for all.
The original goal of the DREAM project was to analyze video data of stories being signed in American Sign Language, focusing specifically on the movement of the mouth, to discover any characteristics governing its movement. However, there was not enough data on mouth movement. Instead, we turned our focus to the timing of ASL, specifically, how much time there is between signs. We took the same video data we had for the mouthing, but analyzed a different tier of information. Using the gaps between the dominant-hand annotations, we collected the transition times across all videos. We exported and analyzed the data, sorting the signs into specific categories and creating bigrams from one category to another. We analyzed the relationships between the categories themselves, and also between the category bigrams. Ultimately, we discovered that a sign’s function does indeed impact its transition time to and from another sign. This information will be helpful in informing how timing is implemented in a sign language avatar; thus, our advisors are encouraging us to publish our work! Thank you DREAM for creating this opportunity!
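The core of the analysis described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the annotation tuples, category labels, and `transition_times` helper are invented for the example and do not reflect the project's actual data schema or tooling.

```python
from collections import defaultdict

# Hypothetical dominant-hand tier: (start_ms, end_ms, gloss, category).
# Glosses and categories here are illustrative, not real project data.
annotations = [
    (0, 400, "MOTHER", "noun"),
    (550, 900, "LOVE", "verb"),
    (1020, 1400, "CHILD", "noun"),
]

def transition_times(tier):
    """Collect the gaps between consecutive annotations, grouped by
    the (category -> category) bigram that each gap spans."""
    bigrams = defaultdict(list)
    for (_, end1, _, cat1), (start2, _, _, cat2) in zip(tier, tier[1:]):
        bigrams[(cat1, cat2)].append(start2 - end1)  # gap in milliseconds
    return bigrams

gaps = transition_times(annotations)
for pair, times in sorted(gaps.items()):
    print(pair, sum(times) / len(times))
```

With per-bigram lists of gaps in hand, comparing average transition times across category pairs is a straightforward aggregation step.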
All video data is courtesy of the following: Carol Neidle, Ashwin Thangali, and Stan Sclaroff  Challenges in the Development of the American Sign Language Lexicon Video Dataset (ASLLVD) Corpus, Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, LREC 2012, Istanbul, Turkey. http://www.bu.edu/linguistics/UG/LREC2012/LREC-asllvd-final.pdf