Meta Presented AI Model That Can Decode Speech from Brain Activity

The model decodes speech segments with up to 73% top-10 accuracy from a vocabulary of 793 words.

Meta has been working on practical applications of its AI tools in various fields for quite some time, with MyoSuite being one example of the benefits AI can bring to medicine. This time, the company presented a new AI model that could help people with medical conditions: it can decode speech from noninvasive recordings of brain activity.

Meta claims that from three seconds of brain activity, the model can decode the corresponding speech segments with up to 73% top-10 accuracy from a vocabulary of 793 words.

The deep learning model is trained with contrastive learning to align noninvasive brain recordings with speech sounds. To do this, the researchers use wav2vec 2.0, an open-source self-supervised learning model, to identify the complex representations of speech in the brains of volunteers listening to audiobooks.
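Contrastive alignment of this kind is typically trained with an InfoNCE-style loss: matched brain/speech pairs in a batch are pulled together while all other pairings act as negatives. A minimal numpy sketch of such a loss, with random toy embeddings standing in for the brain-model and wav2vec 2.0 outputs (all names, shapes, and the temperature value are illustrative, not taken from the paper):

```python
import numpy as np

def info_nce_loss(brain_emb, speech_emb, temperature=0.1):
    """Contrastive loss aligning brain embeddings with speech embeddings.

    Row i of each matrix is a matched pair; every other pairing in the
    batch serves as a negative example."""
    # L2-normalize so dot products are cosine similarities
    b = brain_emb / np.linalg.norm(brain_emb, axis=1, keepdims=True)
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    logits = b @ s.T / temperature                 # (batch, batch) similarities
    # Softmax cross-entropy with the diagonal (true pairs) as targets
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
brain = rng.normal(size=(8, 64))                  # 8 brain snippets, 64-d each
speech = brain + 0.1 * rng.normal(size=(8, 64))   # loosely aligned speech reps
print(info_nce_loss(brain, speech))               # small loss: pairs align
```

Minimizing this loss pushes each brain snippet's embedding toward the embedding of the speech it was recorded against, which is what later makes similarity search over candidate audio clips possible.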

"We focused on two noninvasive technologies: electroencephalography and magnetoencephalography (EEG and MEG, for short), which measure the fluctuations of electric and magnetic fields elicited by neuronal activity, respectively. In practice, both systems can take approximately 1,000 snapshots of macroscopic brain activity every second, using hundreds of sensors."

The researchers leveraged four open-source EEG and MEG datasets, capitalizing on more than 150 hours of recordings of 169 healthy volunteers listening to audiobooks and isolated sentences in English and Dutch. Then, they input those EEG and MEG recordings into a “brain” model, which consists of a standard deep convolutional network with residual connections. 
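A residual connection simply adds a layer's input back onto its output, so each layer learns a correction rather than a full transformation. A toy single-channel sketch of one such block (the actual brain module is multi-channel, learned, and much deeper; the kernel and activation here are illustrative):

```python
import numpy as np

def residual_block(x, kernel):
    """One residual layer: same-padded 1-D convolution + ReLU,
    plus a skip connection adding the input back to the output."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad)                            # 'same' padding
    conv = np.convolve(xp, kernel, mode="valid")   # output length == len(x)
    return x + np.maximum(conv, 0.0)               # skip connection + ReLU

signal = np.sin(np.linspace(0, 4, 100))            # stand-in for one MEG channel
smooth = np.array([0.25, 0.5, 0.25])               # illustrative smoothing kernel
out = residual_block(signal, smooth)
print(out.shape)                                   # same length as the input
```

Because the skip path passes the input through unchanged, stacking many such blocks stays trainable; this is the standard motivation for residual connections in deep convolutional networks.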

The architecture learns to align the output of this brain model to the deep representations of the speech sounds that were presented to the participants. After training, the system performs zero-shot classification: "given a snippet of brain activity, it can determine from a large pool of new audio clips which one the person actually heard. From there, the algorithm infers the words the person has most likely heard."
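Once brain and speech embeddings live in a shared space, zero-shot classification reduces to a nearest-neighbor search over the candidate pool. A toy numpy sketch (dimensions, the noise level, and the pool size of 793 matching the vocabulary figure are illustrative, not the paper's setup):

```python
import numpy as np

def top_k_candidates(brain_emb, pool_emb, k=10):
    """Rank a pool of audio-clip embeddings by cosine similarity to one
    brain-activity embedding; return the indices of the top k, best first."""
    b = brain_emb / np.linalg.norm(brain_emb)
    p = pool_emb / np.linalg.norm(pool_emb, axis=1, keepdims=True)
    sims = p @ b                        # cosine similarity to each candidate
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(1)
pool = rng.normal(size=(793, 64))       # one embedding per candidate segment
true_idx = 42                           # the clip the person actually heard
probe = pool[true_idx] + 0.2 * rng.normal(size=64)  # noisy "brain" embedding
top10 = top_k_candidates(probe, pool, k=10)
print(true_idx in top10)                # a hit here counts toward top-10 accuracy
```

The reported 73% top-10 accuracy means the correct segment landed somewhere in this ranked shortlist of ten for 73% of test snippets.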

The next step is to see if the team can extend this model to directly decode speech from brain activity without needing the pool of audio clips.

The paper concludes that this approach benefits from pooling large amounts of heterogeneous data, which could help improve decoding when individual datasets are small. The algorithms could be pretrained on large datasets spanning many individuals and conditions, and then support the decoding of brain activity for a new patient with little data.
