UCLA scientists have developed a new type of sign language glove that translates American Sign Language (ASL) into speech in real time, with the translation read out loud through a connected smartphone. The device has the potential to let people who are deaf or hard of hearing communicate directly with anyone, without the need for an interpreter.
Sensors within this one-handed wearable read the ASL signing for translation by the app.
The wearable contains a sensor that runs the length of each finger and the thumb, allowing it to register a letter, word, or phrase as it is formed in American Sign Language.
The glove transmits its sensor signals wirelessly to the smartphone, which translates the motions into spoken words at a rate of about one word per second. The researchers developed the device with the goal of making communication easier for deaf people.
The sign language glove could bridge the communication gap between people who do and do not use ASL.
“Our hope is that this opens up an easy way for people who use sign language to communicate directly with non-signers without needing someone else to translate for them,” said the project’s lead researcher, Jun Chen. “In addition, we hope it can help more people learn sign language themselves.”
The UCLA research team published its results in the journal Nature Electronics.
An estimated 100,000 to 1 million people in the United States use American Sign Language to communicate. ASL, however, is only one of more than 300 sign languages in use worldwide. The wearable does not yet translate British Sign Language, for example, the other primary sign language of the English-speaking world, which is used by approximately 151,000 UK adults, according to data from the British Deaf Association.
The researchers also tested components beyond the glove itself. They trialed adhesive sensors placed between the wearer’s eyebrows and at the sides of the mouth, which captured the facial expressions that are a component of American Sign Language.