New technology gives clinicians inside view of patients’ joints in motion
Discovery could lead to new generation of 3-D imaging tools for physiotherapy and surgery.
By KATIE WILLIS
New technology from University of Alberta computing scientists is enabling clinicians to get a better, fuller picture of patients’ joints in motion.
“What we’re trying to do is create a complete dynamic model of the entire knee joint in motion, including bone, muscle and cartilage,” explained Pierre Boulanger, professor in the Department of Computing Science and Cisco Chair in Healthcare Solutions.
“We’re trying to use advanced neural network techniques to segment these knee structures using standard MRI and then track them using a novel real-time open MRI machine to capture the motion.”
The technique uses a deep learning algorithm to segment the knee anatomy automatically from examples provided by clinicians.
It allows clinicians to see the unique, dynamic movements of individual bones inside a patient’s knee. And because it is based on MRI rather than X-ray imaging, it avoids exposing patients to potentially harmful X-ray radiation.
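To give a flavour of what “learning segmentation from clinician-provided examples” means, here is a minimal, purely illustrative sketch. The study itself uses a structured deep neural network on real 3-D MRI volumes; this toy instead trains a simple per-pixel logistic classifier on a synthetic 2-D “slice,” where a bright circle stands in for bone and the ground-truth mask stands in for a clinician’s labels. All data and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "MRI slice": a bright circular structure (a stand-in for bone)
# on a darker, noisy background. The mask plays the role of a clinician's
# hand-drawn labels used as training examples.
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
mask = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2   # ground-truth labels
image = 0.3 + 0.5 * mask + 0.1 * rng.standard_normal((H, W))

# Per-pixel features: raw intensity, a 3x3 local-mean context feature,
# and a constant bias term.
pad = np.pad(image, 1, mode="edge")
local_mean = sum(
    pad[dy:dy + H, dx:dx + W] for dy in range(3) for dx in range(3)
) / 9.0
X = np.stack([image.ravel(), local_mean.ravel(), np.ones(H * W)], axis=1)
y = mask.ravel().astype(float)

# Fit a logistic classifier with plain gradient descent on cross-entropy.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions per pixel
    w -= 0.5 * X.T @ (p - y) / len(y)      # average cross-entropy gradient

# Predicted segmentation: threshold the per-pixel probabilities.
pred = (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
accuracy = (pred == mask.ravel()).mean()
print(f"pixel accuracy: {accuracy:.3f}")
```

A real segmentation network replaces the hand-built features and logistic weights with many learned convolutional layers, but the training loop has the same shape: compare predictions against labeled examples and adjust parameters to reduce the error.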
While the researchers used the knee joint as a case study, they said the technology can be applied to any joint or any motion in the body.
Physiotherapists and orthopedic surgeons can also use the technology to understand how their interventions affect the body over time. This is especially useful when a patient’s problem is dynamic, appearing only when the joint is in motion, which makes it very hard to diagnose with static imaging.
“This is the beginning of a new generation of imaging tools that can create personalized models of individual patients’ joints,” added Boulanger.
“Ultimately, our goal is to develop patient-specific modelling, technology and tools for use in a clinical setting,” said Boulanger, who worked on the study with Constance Lebrun and Fateme Esfandiarpour, U of A family physicians from the Glen Sather Sports Medicine Clinic.
The study, “A Structured Deep-Learning Based Approach for the Automated Segmentation of Human Leg Muscle from 3D MRI,” was presented at the 2017 Conference on Computer and Robot Vision.