Eyes are faster than hands: Vision-based machine learning for soft wearable robot enables disabled person to naturally grasp objects.
Source: Seoul National University

Wearable robot enables disabled person to grasp objects

Professors Sungho Jo (KAIST) and Kyu-Jin Cho (Seoul National University), leading a collaborative research team at the Soft Robotics Research Center (SRRC) in Seoul, Korea, have proposed a new intention-detection paradigm for soft wearable hand robots. The proposed paradigm predicts grasping and releasing intentions from user behaviors, enabling spinal cord injury (SCI) patients with lost hand mobility to pick and place objects.

They developed a machine learning method that predicts user intentions for wearable hand robots using a first-person-view camera. Their development is based on the hypothesis that user intentions can be inferred from the user's arm behaviors and hand-object interactions.

The machine learning model used in this study, the Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net), is designed around this hypothesis. VIDEO-Net is composed of spatial and temporal sub-networks: the temporal sub-network recognizes user arm behaviors, and the spatial sub-network recognizes hand-object interactions.
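The two-stream idea can be illustrated with a minimal sketch. This is not the authors' code: the real VIDEO-Net uses deep spatial and temporal sub-networks over egocentric video frames, whereas here both streams are stand-in functions over toy hand/object coordinates, fused by a hand-tuned weighted sum. All function names, weights, and thresholds are assumptions for illustration only.

```python
# Illustrative two-stream intention classifier in the spirit of VIDEO-Net:
# a spatial stream scores the current frame for hand-object interaction,
# a temporal stream scores a short frame history for arm motion, and the
# fused score yields a grasp/rest decision. Coordinates are toy 2-D points.

def spatial_features(frame):
    """Stand-in for the spatial sub-network (a CNN over one egocentric
    frame): returns a hand-object closeness cue in [0, 1]."""
    hand, obj = frame["hand"], frame["object"]
    dist = abs(hand[0] - obj[0]) + abs(hand[1] - obj[1])
    return 1.0 / (1.0 + dist)

def temporal_features(frames):
    """Stand-in for the temporal sub-network (a recurrent model over the
    frame history): returns the average approach speed toward the object."""
    d = [abs(f["hand"][0] - f["object"][0]) + abs(f["hand"][1] - f["object"][1])
         for f in frames]
    return (d[0] - d[-1]) / max(len(frames) - 1, 1)  # > 0 if hand approaches

def predict_intention(frames, w_spatial=2.0, w_temporal=3.0, threshold=1.0):
    """Fuse both streams into a single grasp-intention decision."""
    score = (w_spatial * spatial_features(frames[-1])
             + w_temporal * temporal_features(frames))
    return "grasp" if score > threshold else "rest"

# Hand steadily approaching an object at (5, 0):
history = [{"hand": (0, 0), "object": (5, 0)},
           {"hand": (2, 0), "object": (5, 0)},
           {"hand": (4, 0), "object": (5, 0)}]
print(predict_intention(history))  # → grasp
```

The point of the split mirrors the paper's hypothesis: closeness alone (spatial) is ambiguous, since a hand can rest near an object, but closeness combined with approach motion (temporal) is a much stronger grasp cue.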

An SCI patient wearing Exo-Glove Poly II, a soft wearable hand robot, successfully picked and placed various objects and performed essential activities of daily living, such as drinking coffee, without additional help. The system is advantageous in that it detects user intentions without requiring person-to-person calibration or additional user actions, allowing the wearable hand robot to interact with the user seamlessly.

How does this system work?
This technology aims to predict user intentions, specifically grasping and releasing intent toward a target object, using a first-person-view camera mounted on glasses (something like Google Glass could be used in the future). VIDEO-Net, a deep learning-based algorithm, predicts user intentions from the camera feed based on user arm behaviors and hand-object interactions. The camera captures the environment and the user's movements, and this data is used to train the machine learning algorithm.

Instead of bio-signals, which are often used for intention detection in disabled people, we use a simple camera to determine whether the user is trying to grasp an object. This works because the target users can move their arms, but not their hands. We can predict the user's grasping intention by observing the arm movement and the distance between the hand and the object, and interpreting those observations with machine learning.
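For a wearable robot to "interact seamlessly," noisy per-frame predictions must be turned into a stable actuator command so that a single misclassified frame does not open or close the glove. One plausible way to do this, sketched below as an assumption rather than the authors' implementation, is a majority vote over a short sliding window; the window size and label names are invented for illustration.

```python
from collections import deque, Counter

class IntentionSmoother:
    """Smooths per-frame intention labels into a stable command by
    majority vote over a sliding window of recent predictions."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # keeps only the last `window` labels
        self.current = "rest"

    def update(self, frame_prediction):
        """Feed one per-frame label ('grasp', 'release', or 'rest');
        the command changes only when a label wins a strict majority."""
        self.history.append(frame_prediction)
        label, count = Counter(self.history).most_common(1)[0]
        if count > len(self.history) // 2:
            self.current = label
        return self.current

smoother = IntentionSmoother(window=5)
stream = ["rest", "grasp", "rest", "grasp", "grasp", "grasp", "release"]
commands = [smoother.update(p) for p in stream]
print(commands)  # → ['rest', 'rest', 'rest', 'rest', 'grasp', 'grasp', 'grasp']
```

Note that the single stray "release" frame at the end does not flip the command; the glove only acts once the grasp intent persists across most of the window, trading a few frames of latency for robustness.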

Who can benefit from this technology?
As mentioned earlier, this technology detects user intentions from arm behaviors and hand-object interactions. It can be used by anyone with lost hand mobility, whether from spinal cord injury, stroke, cerebral palsy, or other injuries, as long as they can move their arm voluntarily. This concept of using vision to estimate human behavior could also extend to other assistive technologies.

What are the limitations and future works?
Most of the limitations stem from the use of a monocular camera. For example, if the target object is occluded by another object, performance degrades. Likewise, if the user's hand is not visible in the camera view, the technology cannot be used. To overcome this lack of generality, the algorithm needs to be improved by incorporating additional sensor information or other existing intention-detection methods, such as electromyography sensors or eye-gaze tracking.

To use this technology in daily life, what do you need?
For this technology to be used in daily life, three devices are needed: a wearable hand robot with an actuation module, a computing device, and glasses with a mounted camera. We aim to reduce the size and weight of the computing device so that the robot is portable enough for daily use. So far, we have not found a compact computing device that fulfills our requirements, but we expect that neuromorphic chips capable of deep learning computation will become commercially available.

Related articles

A smart orthosis for a stronger back

Researchers developed ErgoJack to relieve back strain and encourage workers to execute strenuous movements in a more ergonomic way

AI system tracks tremors in Parkinson’s patients

Researchers have developed machine learning algorithms that, combined with wearable sensors, can continuously track tremor severity in Parkinson's patients.

Robotic hand merges amputee and robotic control

Scientists have successfully tested neuroprosthetic technology that combines robotic control with users’ voluntary control, opening avenues in the new interdisciplinary field of shared control for neuroprosthetic technologies.

Robotic catheter navigates autonomously inside body

The robotic catheter, using a novel sensor informed by AI and image processing, makes its own way to a leaky heart valve.

Necklace detects abnormal heart rhythm

A necklace which detects abnormal heart rhythm will be showcased for the first time on EHRA Essentials 4 You, a scientific platform of the European Society of Cardiology (ESC).

Wearable tracks COVID-19 key symptoms

Researchers have developed a wearable device to catch early signs and symptoms associated with COVID-19 and to monitor patients as the illness progresses.

Smart insoles unlock the secrets of your sole

Researchers at Stevens Institute of Technology have developed an AI-powered, smart insole that instantly turns any shoe into a portable gait-analysis laboratory.

Fighting hand tremors with AI and robots

Researchers have tapped AI techniques to build an algorithmic model that will make the robots more accurate, faster, and safer when battling hand tremors.

AI challenge aims to improve mammography accuracy

AI techniques, used in combination with the evaluation of expert radiologists, improve the accuracy in detecting cancer using mammograms.