
AI Model That Turns Sign Language Into English In Real-Time

The AI model created by Engineering student Priyanjali Gupta can translate American Sign Language into English in real-time using a webcam feed.

Engineering student Priyanjali Gupta created an AI model that can turn American Sign Language (ASL) into English in real-time. The model uses the TensorFlow Object Detection API and is built with transfer learning from a pre-trained ssd_mobilenet model.
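For context, here is a minimal sketch (not Gupta's actual code) of what real-time inference with a model exported from the TensorFlow Object Detection API typically looks like. The SavedModel path and the 0.7 score threshold are illustrative assumptions:

```python
# Sketch: run a fine-tuned SSD MobileNet detector on live webcam frames.
# "exported_model/saved_model" is a hypothetical path to a model exported
# with the TensorFlow Object Detection API.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")

cap = cv2.VideoCapture(0)  # open the default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The Object Detection API expects a uint8 batch of shape [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    scores = detections["detection_scores"][0].numpy()
    boxes = detections["detection_boxes"][0].numpy()  # normalized [y1, x1, y2, x2]
    h, w = frame.shape[:2]
    for box, score in zip(boxes, scores):
        if score > 0.7:  # only draw confident detections (assumed threshold)
            y1, x1, y2, x2 = box
            cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                          (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
    cv2.imshow("ASL detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```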

"The dataset is made manually by running the Image Collection python file that collects images from your webcam for all the mentioned below signs in the American Sign Language: Hello, I Love You, Thank you, Please, Yes, and No."

As Gupta told Interesting Engineering, the motivation to do something with her knowledge and skillset came from her mom, who asked her "to do something now that she's studying engineering."

Gupta says she was inspired by Nicholas Renotte's video on real-time Sign Language detection:

"The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames for which I'm likely to use LSTM. I'm currently researching on it," Gupta says.

Creating something like this is certainly not easy, but Gupta is hopeful:

"I'm just an amateur student but I'm learning. And I believe, sooner or later, our open source community, which is much more experienced than me, will find a solution."

See the model's video presentation on Gupta's LinkedIn, and don't forget to join our new Reddit page and our new Telegram channel, follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
