Paralyzed Woman Got Her Voice Back Thanks to Brain Implant & AI

The tech can decode 78 words per minute.

People's relationship with AI is rocky these days, but it's hard to deny its usefulness when you look at some of the advancements it has made possible.

Meet Ann. When she was 30, she suffered a stroke that left her paralyzed. She couldn't move or talk, but now there is a chance for her to regain at least part of her ability to speak. She is helping researchers from UC San Francisco and UC Berkeley develop new brain-computer interface technology that can help people communicate via a digital avatar.

The system decodes brain signals into words that the avatar speaks. The researchers trained deep-learning models on neural data collected as the participant attempted to silently speak sentences. As a result, their model produces around 78 words per minute with a median word error rate of 25%, a substantial improvement over the 14 words per minute demonstrated by other technologies.
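The team hasn't published its code here, but decoders of this kind typically pair a recurrent network over the neural time series with a sequence loss such as CTC, which aligns variable-length text to the recording without needing frame-by-frame labels. Below is a minimal, hypothetical sketch in PyTorch; the channel count, layer sizes, phoneme vocabulary, and the choice of CTC are illustrative assumptions, not the team's actual architecture.

```python
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    """Maps multichannel neural recordings to phoneme probabilities.
    All dimensions here are illustrative assumptions, not the
    UCSF/Berkeley team's published model."""
    def __init__(self, n_channels=253, hidden=256, n_phonemes=40):
        super().__init__()
        # Recurrent network over the time axis of the recorded signal
        self.rnn = nn.GRU(n_channels, hidden, num_layers=3,
                          batch_first=True, bidirectional=True)
        # Project to phoneme classes plus one CTC "blank" symbol
        self.head = nn.Linear(2 * hidden, n_phonemes + 1)

    def forward(self, x):                     # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h).log_softmax(-1)   # (batch, time, classes)

model = NeuralSpeechDecoder()
ctc = nn.CTCLoss(blank=40)                    # blank = last class index
x = torch.randn(8, 500, 253)                  # 8 trials, 500 time steps
targets = torch.randint(0, 40, (8, 30))       # 30 phoneme IDs per trial
log_probs = model(x).permute(1, 0, 2)         # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((8,), 500, dtype=torch.long),
           target_lengths=torch.full((8,), 30, dtype=torch.long))
loss.backward()
```

The appeal of a sequence loss like CTC in this setting is that researchers only need to know which sentence the participant attempted, not exactly when each sound was attempted.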

"For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant’s pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis," state the researchers.

Image credit: Noah Berger

To put it simply, the team came up with an algorithm to synthesize Ann's speech so that it sounds like her voice before the injury, using a recording of her speaking at her wedding. The work is admirable, and it also touched Ann, who says it's like "hearing an old friend."

The avatar is also driven by signals from Ann's brain and can express her emotions.

"The team animated Ann’s avatar with the help of software that simulates and animates muscle movements of the face, developed by Speech Graphics, a company that makes AI-driven facial animation," it says on the UC San Francisco website. "The researchers created customized machine-learning processes that allowed the company’s software to mesh with signals being sent from Ann’s brain as she was trying to speak and convert them into the movements on her avatar’s face, making the jaw open and close, the lips protrude and purse and the tongue go up and down, as well as the facial movements for happiness, sadness and surprise."

There is also other research on the same topic that might interest you: it demonstrates a speech-to-text brain-computer interface that records spiking activity from intracortical microelectrode arrays and achieves a 9.1% word error rate on a 50-word vocabulary. Both models approach the pace of natural conversation, around 160 words per minute, which opens remarkable possibilities for people who can't speak.
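For context, word error rate, the metric behind both the 25% and 9.1% figures, is the word-level edit distance between the decoded sentence and the reference sentence, divided by the reference length. A self-contained implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word sentence -> 25% WER
print(word_error_rate("please bring me water",
                      "please bring me coffee"))  # 0.25
```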

Read more about Ann's story here and join our 80 Level Talent platform and our Telegram channel, follow us on Threads, Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
