
Robotic system diagnoses diseases through eye movements
A newly developed robotic system can help diagnose neurodegenerative diseases, such as dementia and Parkinson's disease, through the analysis of eye movements.
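The item does not spell out how the analysis works; as a rough illustration, a screening pipeline of this kind might reduce a gaze recording to simple oculomotor statistics and feed them to a classifier. Everything below, including the feature choices and the saccade threshold, is a hypothetical sketch rather than the system's actual method.

```python
# Hypothetical sketch: classifying a gaze recording by simple
# oculomotor features (saccade velocity, fixation time). All names
# and thresholds are illustrative assumptions, not study details.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_features(t, x, y, saccade_thresh=30.0):
    """Reduce a gaze trace (degrees, sampled at times t in seconds)
    to a handful of summary features."""
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt   # deg/s
    saccades = speed > saccade_thresh               # crude saccade detector
    return np.array([
        speed[saccades].mean() if saccades.any() else 0.0,  # mean saccade velocity
        speed[saccades].max() if saccades.any() else 0.0,   # peak saccade velocity
        dt[~saccades].sum(),                                # total fixation time
        saccades.mean(),                                    # fraction of samples in saccade
    ])

# Training on labelled recordings (1 = patient, 0 = control) might look like:
# X = np.stack([gaze_features(*rec) for rec in recordings])
# clf = LogisticRegression().fit(X, labels)
```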
As part of the “MED²ICIN” lighthouse project, seven Fraunhofer Institutes are presenting the first prototype of a digital patient model.
We present five upper-body exoskeletons that might help restore natural hand or limb movements.
Experts at Kessler Foundation led the first pilot randomized controlled trial of the effects of robotic-exoskeleton-assisted exercise rehabilitation on mobility, cognition, and brain connectivity in people with substantial MS-related disability.
Scientists in Dresden are expanding their digital health expertise in multiple sclerosis (MS) therapy and research with an ambitious scientific project: creating a "digital twin" from data.
Scientists have discovered a new way to analyse microscopic cells, tissues and other transparent specimens by improving an almost 100-year-old imaging technique.
Two ALS patients, implanted with a brain-computer interface via the jugular vein and without the need for open brain surgery, successfully controlled their personal computer through direct thought.
Researchers have designed a skin-like device that can measure small facial movements in patients who have lost the ability to speak.
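The report does not describe how the sensor readings become communication; one plausible sketch, assuming a strain-sensing patch calibrated to a small vocabulary of deliberate facial movements, is to summarize each time window of signals and match it to the nearest trained movement class. The sensor layout, window features, and message set below are illustrative assumptions, not device details.

```python
# Hypothetical sketch: mapping strain-sensor windows from a skin-worn
# patch to a small set of messages. Feature choices and the message
# list are assumptions for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

MESSAGES = ["yes", "no", "water", "pain"]

def window_features(strain):
    """Summarize one time window of strain signals, shape (samples, n_sensors)."""
    return np.concatenate([
        strain.mean(axis=0),                     # average deformation per sensor
        strain.std(axis=0),                      # variability per sensor
        strain.max(axis=0) - strain.min(axis=0), # peak-to-peak range per sensor
    ])

# Calibration on a few labelled repetitions of each movement might look like:
# X = np.stack([window_features(w) for w in calibration_windows])
# clf = KNeighborsClassifier(n_neighbors=3).fit(X, label_indices)
# msg = MESSAGES[clf.predict([window_features(new_window)])[0]]
```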
The UNC School of Medicine lab of Jason Franz, PhD, created virtual reality experiments to show how a potentially portable and inexpensive test could reduce falls and related injuries in people with multiple sclerosis.
A new line of wearable robotics, a lightweight version of the armor worn by comic-book hero Iron Man, could keep seniors on their feet longer.
Researchers have developed a system that helps machine learning models glean training information for diagnosing and treating brain conditions.
A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract – an anatomically detailed computer simulation including the lips, jaw, tongue and larynx.
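The description implies a two-stage mapping: brain activity drives the simulated vocal tract, and the vocal tract's movements are then rendered as sound. A minimal sketch of such a pipeline, assuming recurrent networks and made-up feature dimensions (the layer sizes, channel counts, and class name below are not from the published model), might look like this.

```python
# Illustrative two-stage decoder: neural activity -> vocal-tract
# kinematics -> acoustic features. Architecture and dimensions are
# assumptions for the sketch, not the published system.
import torch
import torch.nn as nn

class TwoStageSpeechDecoder(nn.Module):
    def __init__(self, n_channels=256, n_articulators=33, n_acoustic=32):
        super().__init__()
        # Stage 1: neural recordings -> articulator trajectories
        self.brain_to_tract = nn.LSTM(n_channels, 128, batch_first=True)
        self.tract_head = nn.Linear(128, n_articulators)
        # Stage 2: articulator trajectories -> acoustic features
        self.tract_to_sound = nn.LSTM(n_articulators, 128, batch_first=True)
        self.sound_head = nn.Linear(128, n_acoustic)

    def forward(self, neural):               # (batch, time, channels)
        h, _ = self.brain_to_tract(neural)
        tract = self.tract_head(h)            # (batch, time, articulators)
        h, _ = self.tract_to_sound(tract)
        acoustic = self.sound_head(h)         # (batch, time, acoustic features)
        return tract, acoustic

# A vocoder would then turn the predicted acoustic features into audio.
```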