When technology companies talk about “friction,” they’re often referring to the steps required to get thoughts and commands from your head into the computer through an interface. Right now, you’re probably tapping your phone’s touchscreen, or using your computer’s mouse and keyboard, to interface with the web. Big tech is always looking for ways to streamline that process, and this week, Facebook gave us a look at its long-term plans to shorten the distance between your brain and a computer.
Facebook established its Reality Labs division six years ago (it was originally called Oculus Research) to imagine what the future of human-computer interaction will look like. Last year, Facebook offered a look at its concept for smart glasses that use built-in microphones and AI to interpret the wearer’s surroundings, canceling out background noise and amplifying desirable sounds in real time to make conversation easier. Now, the company has expanded that preview to include its vision for wrist-based interactions that go far beyond a regular smartwatch with a touchscreen.
Wrist-based computing
The early prototype takes a somewhat familiar form: a squarish computing unit about the size of a small stack of Wheat Thins crackers, attached to the wrist by a chunky band. According to Facebook, the wrist is an ideal spot to put a computer interface on a human, partly because the brain devotes a large number of neurons to controlling the hands and wrists. Through a process called electromyography (EMG), sensors in a wearable device can read the electrical signals traveling along motor nerves and translate them into commands for electronic devices.
Facebook demonstrates the idea by letting users pinch their fingers together to issue a simple command, like skipping a track playing on a device. The sensors are extremely sensitive and can pick up movements as small as a single millimeter. Eventually, Facebook says, the system could interpret the motor nerve signals even when they don’t produce any physical movement, so users wouldn’t have to move their fingers at all.
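Facebook hasn’t published the details of its signal-processing pipeline, but a rough sketch of the general concept might look like the example below: sample the electrodes, compare the activation pattern against a calibrated “pinch” template, and fire a command when they match. The channel handling, threshold, and skip_track() callback here are all hypothetical stand-ins, not Facebook’s actual code.

```python
# Illustrative only: one very simplified way EMG samples from a wrist sensor
# could be turned into a discrete "skip track" command.
import numpy as np

PINCH_THRESHOLD = 0.6  # similarity cutoff, learned per user during calibration

def rms_per_channel(window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude of each electrode channel in the window."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def looks_like_pinch(window: np.ndarray, template: np.ndarray) -> bool:
    """Compare the window's activation pattern to a calibrated pinch template."""
    activation = rms_per_channel(window)
    similarity = np.dot(activation, template) / (
        np.linalg.norm(activation) * np.linalg.norm(template) + 1e-9
    )
    return similarity > PINCH_THRESHOLD

def on_new_samples(window: np.ndarray, template: np.ndarray, skip_track) -> None:
    """Called for each new window of EMG samples; fires the command on a match."""
    if looks_like_pinch(window, template):
        skip_track()  # e.g., advance to the next song
```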
AI to predict what you want
The other piece required to remove friction comes from the computer’s ability to anticipate what a person will need and offer it up before they ask. The Facebook blog post poses the following question: “What if, rather than clicking through menus to do the thing you’d like to do, the system offered that thing to you and you could confirm it with just a simple ‘click’ gesture?” Facebook has already done a massive amount of work learning its users’ preferences. Here, however, the system would constantly learn about your habits in order to anticipate events in your daily life, rather than which ads you might click on.
For this to work, the computer would need to learn about the wearer, but also gather information about the surrounding environment so it can present options relevant to the current situation. That’s where smart glasses with built-in cameras and audio sensors could help. In theory, the devices could also pull from sources like location data, or even your own biological data, to get ahead of what you might want them to do. So, if the system sees you’re at your local coffee shop, it might ask if you want to order your regular drink. You could then confirm or decline with a small hand gesture, or possibly a voice command.
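Facebook hasn’t described how such a system would be built, but the flow it sketches — fuse context into a suggestion, then wait for a confirming gesture — can be illustrated with a toy example. The Context and Suggestion classes, the coffee-shop rule, and the gesture names below are all hypothetical, included only to show the shape of the interaction.

```python
# Illustrative only: combine simple context into a suggested action,
# then resolve it with a confirm/dismiss gesture.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    place: str   # e.g. "coffee_shop", inferred from location data
    hour: int    # local hour of day

@dataclass
class Suggestion:
    label: str
    action: str

def suggest(context: Context, usual_order: str) -> Optional[Suggestion]:
    """Offer an action the wearer will probably want in this situation."""
    if context.place == "coffee_shop" and 7 <= context.hour <= 11:
        return Suggestion(label=f"Order your usual {usual_order}?",
                          action="place_order")
    return None

def handle_gesture(suggestion: Suggestion, gesture: str) -> str:
    """A pinch confirms the suggestion, a flick dismisses it."""
    if gesture == "pinch":
        return f"executing: {suggestion.action}"
    if gesture == "flick":
        return "dismissed"
    return "waiting"
```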
[Related: Google and Levi’s built a new gesture-sensing smart jacket]
It could also learn to interpret your interactions with other objects. For instance, if it recognizes that you’re putting on your running shoes, it could offer to track your impending workout. While that’s possible with computer vision, the possibilities multiply if the computer can actually pull information from other connected devices.
Feeling the feedback
While input is important, Facebook is also working on refining the feedback side of the equation using haptics, which you’ll find in devices like your phone and Sony’s PlayStation 5 DualSense controller. Haptics use localized vibrations to simulate different sensations.
In some ways, Facebook’s use of haptics feels familiar. For instance, the wearable could assign a specific buzzing pattern to each person who frequently calls you, so you know who’s on the line without even looking at your phone. Facebook is also working on “haptic emojis,” which associate vibration patterns with popular emojis.
Beyond simple patterns, however, the company is also working on using haptic sensations to approximate more complex experiences. If you’re pulling back the string of a bow in VR, for example, the device could use vibrations to approximate the sensation you’d actually feel in that situation.
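Neither the caller-specific buzzes nor the bow-string effect has a published spec, but both boil down to driving a small motor with either a stored on/off pattern or a continuously varying intensity. The pattern values, names, and motor-driver calls in this sketch are purely illustrative assumptions.

```python
# Illustrative only: discrete haptic patterns vs. continuous haptic intensity.
import time

# Hypothetical on/off vibration patterns, in milliseconds.
PATTERNS_MS = {
    "frequent_caller": [100, 50, 100],          # two quick pulses
    "haptic_emoji_heart": [60, 40, 60, 40, 200],  # a heartbeat-like rhythm
}

def buzz(duration_ms: int) -> None:
    """Stand-in for the wearable's motor driver."""
    time.sleep(duration_ms / 1000)

def play_pattern(name: str) -> None:
    """Even slots run the motor; odd slots are pauses."""
    for i, duration in enumerate(PATTERNS_MS.get(name, [])):
        if i % 2 == 0:
            buzz(duration)
        else:
            time.sleep(duration / 1000)

def bow_draw_feedback(draw_fraction: float, set_motor_intensity) -> None:
    """Scale vibration intensity with how far a virtual bowstring is drawn."""
    draw_fraction = max(0.0, min(1.0, draw_fraction))
    set_motor_intensity(draw_fraction ** 2)  # ramp up like rising string tension
```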
Facebook has also experimented with prototypes that go beyond simple vibration. One, called Tasbi, uses actuators to squeeze the wearer’s wrist in addition to vibrating, which adds another layer of feedback.
How much do we want Facebook in our brains?
Many of these features only work if Facebook has access to tons of data about users and how their minds work. It’s certainly understandable to feel trepidation about a big tech company with a spotty security record monitoring your every action.
Facebook says it encourages developers working on its projects to publish their work so peers can evaluate it. The lab also has a broader goal of examining the neuroethical questions raised by technology like this. That will become an increasingly important part of the conversation as we encounter more of this kind of integrated technology.
For now, Facebook is still a long way from implementing any of this in the real world. But the lab is out there doing the work. Between projects like this and Elon Musk’s Neuralink, the future could involve a lot less typing and a lot more computer mind control.