The FingerTrak device uses four small thermal cameras and machine learning to accurately capture the 3D position of the human hand and fingers, which is potentially useful for sign language translation or disease diagnostics.
Source: Courtesy of Cornell University

3D hand-sensing wristband

Researchers from Cornell University and the University of Wisconsin–Madison have designed a wrist-mounted device and developed software that allows continuous tracking of the entire human hand in three dimensions.

The research team views the bracelet, called FingerTrak, as a potential breakthrough in wearable sensing technology with applications in areas such as mobile health, human-robot interaction, sign language translation, and virtual reality.

The device senses and translates into three-dimensional coordinates the many positions of the human hand using three or four miniature, low-resolution thermal cameras that read contours of the wrist. “This was a major discovery by our team – that by looking at your wrist contours, the technology could reconstruct in 3D, with keen accuracy, where your fingers are,” said Cheng Zhang, assistant professor of information science and director of Cornell’s new SciFi Lab, where FingerTrak was developed. “It’s the first system to reconstruct your full hand posture based on the contours of the wrist.”

Yin Li, assistant professor of biostatistics and medical informatics at the UW School of Medicine and Public Health, contributed to the software behind FingerTrak. “Our team had developed a computer vision algorithm using deep learning, which enables the reconstruction of the 3D hand pose from multiple thermal images,” Li said.

Conventional devices have used wrist-mounted cameras to capture finger positions, but these have been considered too bulky and obtrusive for everyday use, and most could reconstruct only a few discrete hand gestures.

The FingerTrak device is a lightweight bracelet, allowing for free movement. It uses a combination of thermal imaging and machine learning to virtually reconstruct the hand. Four miniature, thermal cameras – each about the size of a pea – snap multiple silhouette images to form an outline of the hand.

A deep neural network then stitches these silhouette images together and reconstructs the virtual hand in 3D. Through this method, researchers are able to capture the entire hand pose, even when the hand is holding an object.
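The pipeline described above can be sketched in code. The sketch below is purely illustrative: the image resolution, joint count, and the stand-in linear regression are assumptions, not the authors' actual network, which the paper describes as a deep neural network trained on multi-view thermal silhouettes.

```python
import numpy as np

NUM_CAMERAS = 4     # pea-sized thermal cameras around the wrist
IMG_H, IMG_W = 32, 24   # assumed low-resolution thermal frames
NUM_JOINTS = 20     # assumed number of reconstructed hand joints

rng = np.random.default_rng(0)

def capture_silhouettes():
    """Stand-in for grabbing one silhouette frame from each camera."""
    return rng.random((NUM_CAMERAS, IMG_H, IMG_W))

# In the real system a deep neural network performs this mapping; a
# single (untrained) linear layer stands in here just to make the
# input/output shapes of the reconstruction step concrete.
W = rng.standard_normal((NUM_CAMERAS * IMG_H * IMG_W, NUM_JOINTS * 3)) * 0.01

def reconstruct_hand(silhouettes):
    """Map stacked multi-view silhouettes to 3D joint coordinates."""
    features = silhouettes.reshape(-1)    # stitch the four views together
    joints = features @ W                 # learned regression (stub)
    return joints.reshape(NUM_JOINTS, 3)  # (x, y, z) per joint

pose = reconstruct_hand(capture_silhouettes())
print(pose.shape)  # (20, 3)
```

The key design point the sketch preserves is that all four views are fused into one feature vector before regression, so the model can exploit correlations between wrist contours seen from different angles.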

Zhang said the most promising application is in sign language translation. “Current sign language translation technology requires the user to either wear a glove or have a camera in the environment, both of which are cumbersome,” he said. “This could really push the current technology into new areas.”

Li suggests that the device could also be of use for health care applications, specifically in monitoring disorders that affect fine-motor skills. “How we move our hands and fingers often tells about our health condition,” Li said. “A device like this might be used to better understand how the elderly use their hands in daily life, helping to detect early signs of diseases like Parkinson’s and Alzheimer’s.”

The researchers published their work in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
