Augmented Reality App Can Translate Sign Language Into Spoken English, and Vice Versa


Students at New York University [Tandon School of Engineering] have created a working prototype of an app that uses machine learning and augmented reality to enable hearing people to understand sign language, and turns spoken words into sign language for the deaf. ... The students created the project as part of Verizon and NYC Media Lab’s third Connected Futures challenge. Zhongheng Li, one of the project leads, told me that he was inspired by a friend who has two deaf parents. “Her family moved from Hong Kong to the US, and she explained to me that there’s no universal sign language, so they were having trouble communicating,” Li said at a Demo Day in New York on Friday.