A group of researchers at the University of California, Los Angeles has designed a glove-like device that can track finger movements in real time and translate American Sign Language.
The researchers set out to lower the communication barrier between signers and non-signers. With a machine-learning-powered, glove-like wearable sign-to-speech translation system, they hope to shrink that gap significantly.
How the system works
The signer wears a pair of gloves, each fitted with five stretchable sensors made of electrically conducting yarn, one running along each finger. The sensors pick up hand motions and finger positions corresponding to letters, numbers, words, and phrases.
Each movement of the hand and fingers is translated into electrical signals, which are sent to a small circuit board worn on the wrist. The board relays the signals wirelessly to a smartphone, where they are turned into speech and text. For now, the system translates about one word per second.
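The recognition step can be pictured as matching a vector of finger-sensor readings against calibrated templates for known signs. The sketch below is purely illustrative, not the UCLA team's actual algorithm: the sensor values, sign templates, and nearest-template matching are all assumptions made for the example.

```python
import math

# Hypothetical calibration templates: each sign maps to a typical
# 5-element reading, one value per finger sensor (thumb to pinky).
# The numbers are invented for illustration, not real measurements.
TEMPLATES = {
    "A": [0.9, 0.1, 0.1, 0.1, 0.1],   # thumb extended, fingers curled
    "B": [0.1, 0.9, 0.9, 0.9, 0.9],   # four fingers extended, thumb tucked
    "L": [0.9, 0.9, 0.1, 0.1, 0.1],   # thumb and index extended
}

def classify(reading):
    """Return the sign whose template is closest (Euclidean) to the reading."""
    return min(TEMPLATES, key=lambda sign: math.dist(TEMPLATES[sign], reading))

# A noisy reading near the "L" template still resolves to "L":
print(classify([0.85, 0.8, 0.15, 0.2, 0.05]))  # → L
```

A real system would replace the handful of static templates with a trained machine-learning model and would also handle sequences of readings over time, which is what allows whole words and phrases, not just single handshapes, to be recognized.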
Facial expressions included
American Sign Language also uses facial expressions, not merely as paralinguistic elements but as essential linguistic markers. To capture these markers, the researchers added adhesive sensors during testing: one between the eyebrows and two at the corners of the mouth.
This is not the first attempt at sign-to-text translation, but as researcher Chen said, "Previous wearable systems that offered translation from American Sign Language were limited by bulky and heavy device designs or were uncomfortable to wear."
In contrast, the UCLA team's device uses inexpensive, lightweight, and durable stretchable polymers.
UCLA has filed a patent application for the technology. A commercial model is still some way off, however, as it would require a larger vocabulary and a faster translation speed.