The NextMind team talked about their neural interface, which can make you feel like a Jedi, and discussed the future of neural technologies.
NextMind started as a research lab led by Sid Kouider. We want everyone to experience using a Brain-Computer Interface, as we think that in the future, having a direct connection with machines will help humankind level up and will become a normal part of everyday life. That’s why we made the switch from lab to company: to bring this technology to more people, starting with the dev community.
We designed best-in-class noninvasive EEG hardware that is easy to wear, discreet, and offers the best measurement of neural activity on the market. Our machine learning algorithms analyze the collected brain data to recognize when you’re actively focusing on a mind-interactable object (an object with a NeuroTag pattern overlay). By combining neuroscience with artificial intelligence, we are able to decode visual focus in real time. To let users experience this firsthand, we are shipping a Dev Kit that includes the hardware, the algorithms, and a Software Development Kit that developers can use to create their own mind-enabled experiences.
Our BCI is a new way to interact with the world. You could think of it as an evolution of the mouse or touchscreen. In that sense, the applications are limitless. A lot of people compare it to feeling like a Jedi or having superhuman powers.
You can play games without any controllers, like in this demo. Or you can include mind interactions as an additional gaming input, like in this demo. This article on using mind interactions in game development may also interest you.
Beyond gaming, it can be used with smart objects (IoT) around the home, for accessibility applications like typing… basically, to control any digital interface.
How Does It Work?
Right now our SDK uses the Unity game engine. We wanted to make it easy to use for a large number of developers. We also love how Unity makes it easy to create very visual and engaging content, which suits our visual BCI well.
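As a rough illustration of what building on a Unity-based SDK like this might look like, the sketch below wires a focus event on a tagged object to a game action. The namespace, component, and event names here (`NextMind.NeuroTags`, `NeuroTag`, `onTriggered`) are assumptions based on typical Unity event patterns; check the official SDK documentation for the exact API.

```csharp
using UnityEngine;
using NextMind.NeuroTags; // assumed namespace; verify against the SDK docs

// Attach this to a GameObject that carries a NeuroTag component
// (the visual pattern overlay the user focuses on).
public class MindButton : MonoBehaviour
{
    [SerializeField] private NeuroTag neuroTag; // assumed component name

    private void OnEnable()
    {
        // Runs when the SDK decodes sustained visual focus on this tag.
        neuroTag.onTriggered.AddListener(OnMindTriggered);
    }

    private void OnDisable()
    {
        neuroTag.onTriggered.RemoveListener(OnMindTriggered);
    }

    private void OnMindTriggered()
    {
        Debug.Log("User focused on this object; trigger the game action here.");
    }
}
```

The pattern mirrors how Unity developers already handle clicks or UI events, which is part of why a visual BCI maps naturally onto the engine.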
However, we’re also working on expanding to other languages and platforms so it will be easier for even more people to develop with our tech.
For AR/VR, we are compatible with the Oculus Rift, Oculus Quest 1 & 2, HTC Vive and Vive Pro, and HoloLens 1 & 2. Using your mind to interact is really useful in these environments, which we’ve written about here.
When you first set up the NextMind device, you go through a short calibration process in which our machine learning algorithms create a custom model for that user. Our calibration takes only 40 seconds, whereas other BCIs usually require 2 to 5 minutes. The quality of the model varies from session to session: some people are able to use mind interaction on their first try, while others need more practice to master this new sense of control. In the end, nearly everybody is able to perform interactions with their mind, as we worked a lot on the UX to make sure users can feel their brains in action.
Our product is not recommended for people with epilepsy, but we’ve heard back from people with many types of conditions who are testing our technology. Our tech is especially relevant for people with limited mobility, and we are always doing tests with new types of users.
Our goal now is to continue engaging with developers and raise awareness around our BCI technology.
For developers, we want to offer a great experience using our device and SDK. We’ve been rolling out SDK updates almost every month since we started shipping the product in December 2020, as well as tutorials and other guidance in our developer documentation.
On our blog, we will also regularly highlight use cases created by the community.
In the long term, our goal is to decode visual thought at the speed of thought. We have ongoing R&D, and launching our first product helped us learn a lot. Most people are kind of mind-blown when they try our tech, but for us, it’s really just the beginning!