04.06.2018

Creating a piece of mind

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group.

Photo
High-throughput tissue filtering, a major feature of the approach developed by the authors of the study, can help quickly remove extraneous tissues to reveal the desired underlying structures (right) without sacrificing the resolution or intensity gradients present in the native imaging data (left and center).
Source: James Weaver and Steven Keating/Wyss Institute at Harvard University

Curious to see what his brain actually looked like before the tumor was removed, and hoping to better understand his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI and CT scans. He was frustrated, however, that existing methods were prohibitively time-intensive and cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration — which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany — is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D., Ph.D., an Assistant Professor of Radiology at the University of Washington, clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.
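As a concrete illustration of that slice-to-layer correspondence, the sketch below (not part of the published work) stacks a folder of medical image slices into a single 3D volume, with each slice becoming one layer. The folder path, and the use of the pydicom and NumPy libraries, are assumptions made purely for illustration.

```python
# Minimal sketch: how a stack of CT/MRI slices maps onto the layer-by-layer
# structure a 3D printer works with. Assumes a folder of single-frame DICOM
# files; pydicom and numpy are illustrative choices, not the authors' tools.
import glob
import numpy as np
import pydicom

def load_slice_stack(folder):
    """Read every DICOM slice in `folder` and stack them into a 3D volume."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{folder}/*.dcm")]
    # Sort by physical position along the scan axis so the layers are in order.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # Each slice becomes one "layer" of the volume: shape (slices, rows, cols).
    return np.stack([s.pixel_array for s in slices], axis=0)

volume = load_slice_stack("scan_folder")  # hypothetical folder name
print(volume.shape)  # e.g. (number_of_slices, rows, columns)
```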

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from the surrounding tissue and converted into surface meshes in order to be printed. This is typically achieved in one of two ways: a very time-intensive process called “segmentation,” in which a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process, in which a computer program quickly converts every grayscale pixel into either solid black or solid white, depending on whether it falls above or below a chosen threshold shade of gray. But medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often overestimates or underestimates the size of a feature of interest and washes out critical detail.
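To make that tradeoff concrete, here is a minimal sketch of the thresholding step described above, assuming one image slice stored as a 0-255 grayscale NumPy array; the cutoff value of 128 is an arbitrary illustration rather than a value taken from the study.

```python
# Minimal sketch of auto-thresholding: every grayscale pixel is forced to
# solid black or solid white, discarding all the intensity gradations.
import numpy as np

def threshold_slice(gray, cutoff=128):
    """Turn every pixel into solid black (0) or solid white (255).

    Pixels at or above `cutoff` become white, the rest black, so any
    detail encoded in shades of gray on either side of the cutoff is lost.
    """
    return np.where(gray >= cutoff, 255, 0).astype(np.uint8)

# Example: a smooth gradient loses all of its internal detail.
gradient = np.tile(np.arange(256, dtype=np.uint8), (32, 1))
binary = threshold_slice(gradient, cutoff=128)
print(np.unique(binary))  # only two values remain: [0 255]
```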

Photo
A 3D printed foot model (left) and its cross section (right) clearly reveal the intricate internal architecture of the different bone types, as well as the surrounding soft tissue.
Source: James Weaver and Steven Keating/Wyss Institute at Harvard University

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Much as images in black-and-white newsprint convey shading with varying sizes of black ink dots, the more black pixels are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black and white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials, preserving all the subtle variations of the original data with much greater accuracy and speed.
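One common way to produce such a bitmap is error-diffusion (Floyd-Steinberg) dithering, sketched below purely as an illustration; the paper does not specify that this exact algorithm was used, and the 0-255 intensity range is an assumption.

```python
# Minimal sketch of Floyd-Steinberg error-diffusion dithering: each pixel is
# snapped to pure black or white, and the rounding error is pushed onto its
# neighbors, so the local density of black pixels still encodes the original
# shade of gray. Illustrative only; not the authors' exact algorithm.
import numpy as np

def dither_floyd_steinberg(gray):
    """Convert a 2D grayscale array (0-255) to a black-and-white bitmap."""
    img = gray.astype(np.float32).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            img[y, x] = new
            err = old - new
            # Diffuse the quantization error to not-yet-visited neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return (img >= 128).astype(np.uint8) * 255

# A mid-gray patch dithers to roughly half black, half white pixels,
# so the average brightness of the bitmap stays close to the original.
patch = np.full((64, 64), 128, dtype=np.uint8)
bitmap = dither_floyd_steinberg(patch)
print(bitmap.mean())  # close to the input mean of 128
```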

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional — we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.
