Patient records, biological images, medical journal articles, experimental results, treatment outcomes, physician notes on individual cases: all of these represent a treasure trove of current and historical information that, properly analyzed, can provide a foundation for medical research and lead to major advances in healthcare in the coming years. Yet the sheer volume of such information makes digging hidden correlations and patterns out of the data nearly impossible for human minds alone.
That’s why deep learning, with its ability to detect and make use of connections in huge datasets that might otherwise remain unrecognized, is becoming an indispensable tool in medical research.
In this article we’ll take a brief look at some specific examples from the front lines of academic research into applying deep learning to healthcare.
Detection of Cancer
For decades, one of the hottest areas of research in healthcare has been the search for ways to reliably diagnose cancer. Much of this effort focuses on discovering genetic markers that can help identify genes commonly involved in the disease.
With the advent of deep learning, research in this area is beginning to yield increasingly positive results. One example is a project at Oregon State University that uses deep neural networks to differentiate between healthy genetic samples and those indicating breast cancer, a task that requires recognizing subtle patterns in patients’ genetic data.
One of the biggest obstacles to identifying genetic samples that contain indications of cancer is the high dimensionality of genetic data (the dimensionality of a dataset is the number of attributes it has) and the large amount of noise it contains. The researchers, all affiliated with Oregon State’s School of Electrical Engineering and Computer Science, set out to find a means “to transform high-dimensional, noisy gene expression data to a lower dimensional, meaningful representation.”
In a paper entitled “A Deep Learning Approach for Cancer Detection and Relevant Gene Identification,” the research team reports on its success in using a Stacked Denoising Autoencoder (SDAE) to detect genetic markers for cancer. An autoencoder is a feedforward neural network that compresses its input into a code of lower dimensionality and then attempts to reconstruct the original input from that code as closely as possible. In a denoising autoencoder, the input is deliberately corrupted with noise before encoding, which forces the network to learn a code that captures meaningful signal rather than noise; stacking several such layers yields an SDAE.
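To make the idea concrete, here is a minimal sketch of a single denoising-autoencoder layer in PyTorch. It is not the team’s actual architecture: the paper stacks several such layers into an SDAE, and the input size, code size, and noise level below are illustrative assumptions (gene expression profiles typically contain on the order of 20,000 values).

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One denoising-autoencoder layer: corrupt -> encode -> decode.

    Several of these, trained layer by layer, form a stacked denoising
    autoencoder (SDAE). All sizes here are illustrative, not the paper's.
    """
    def __init__(self, input_dim=20000, code_dim=500, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(code_dim, input_dim), nn.Sigmoid())

    def forward(self, x):
        # Corrupt the input with Gaussian noise during training only.
        if self.training:
            x = x + self.noise_std * torch.randn_like(x)
        code = self.encoder(x)      # low-dimensional representation
        return self.decoder(code)   # reconstruction of the *clean* input

# Training minimizes reconstruction error against the uncorrupted input,
# so the code must capture signal rather than noise.
model = DenoisingAutoencoder()
loss_fn = nn.MSELoss()
x_clean = torch.rand(32, 20000)   # a batch of gene expression profiles
loss = loss_fn(model(x_clean), x_clean)
loss.backward()
```

Because each code unit is a weighted combination of gene-level inputs, inspecting the learned weights is one way to trace a compact code back to the genes that drive it, which is the spirit of the gene-identification step described next.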
The researchers report that they have made significant progress in differentiating breast cancer samples from healthy control samples. By analyzing the lower-dimensional coded data and mapping it back to the original representation, they were able to “discover highly relevant genes that could play critical roles and serve as clinical biomarkers for cancer diagnosis.”
Further research will aim at identifying the biomarkers related to specific types of cancer.
Rapid Analysis of Medical Images
The ability to overlay and align two medical images is crucial for in-depth analysis of anatomical differences or changes. Charting the progress of a patient’s brain tumor, for example, might involve comparing MRI scans taken at different times. With traditional methods, however, performing the required registration between images can take hours, because the millions of individual pixels in the scans must be aligned. A faster way of accomplishing this registration would open up exciting clinical possibilities, perhaps even making real-time interventions possible.
That’s the goal of a research project carried out at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Traditionally, the sheer volume of data that must be aligned makes the process computationally complex, and therefore quite time-consuming. An MRI scan consists of hundreds of stacked 2-D images that together constitute a huge 3-D “volume” composed of a million or more 3-D pixels known as “voxels” (a 100-slice scan at 128 × 128 resolution, for instance, already contains more than 1.6 million of them). Aligning all the voxels in one volume with those in another can require a great deal of computation, especially considering that the MRI images may come from different machines, have different spatial orientations, or even represent two different brains.
The problem with current alignment methods is that they don’t learn from previous registration runs. The innovation explored in this research effort, an algorithm called “VoxelMorph,” uses a convolutional neural network (CNN) along with a specially adapted computation layer known as a spatial transformer. During training, carried out on 7,000 publicly available MRI brain scans, the algorithm learns the similarities between voxels in each pair of scans, picking up information about particular groups of voxels, such as anatomical shapes that appear in both scans of a pair. From that information it learns a single, generalized set of parameters that can be applied to any pair of scans.
By plugging the optimized parameters into a simple mathematical function, the algorithm can quickly calculate the alignment of every voxel in any pair of scans. In their research paper, entitled “An Unsupervised Learning Model for Deformable Medical Image Registration,” the team reports that on 250 test scans, registrations that once took hours were accomplished in minutes, and in less than a second when graphics processing units (GPUs) were used in place of traditional CPUs.
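In rough PyTorch terms, the data flow looks something like the sketch below. This is a drastically simplified illustration of a VoxelMorph-style network, not the published model: the real system uses a U-Net backbone and adds a smoothness penalty on the deformation field, and the layer sizes and toy volumes here are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleVoxelMorph(nn.Module):
    """Sketch of a VoxelMorph-style registration network.

    A CNN predicts a per-voxel displacement ("flow") field from a
    (moving, fixed) pair of volumes; a spatial-transformer step then
    warps the moving volume with that field. The published model uses
    a U-Net; this tiny conv stack only illustrates the data flow.
    """
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.LeakyReLU(),
            nn.Conv3d(16, 3, 3, padding=1),  # 3 channels: (dx, dy, dz)
        )

    def forward(self, moving, fixed):
        # Predict a displacement for every voxel (normalized coordinates).
        flow = self.cnn(torch.cat([moving, fixed], dim=1))
        # Spatial transformer: identity sampling grid + flow, then a
        # differentiable resampling of the moving volume.
        d, h, w = moving.shape[2:]
        zz, yy, xx = torch.meshgrid(
            torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
            torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xx, yy, zz], dim=-1).unsqueeze(0)  # (1,d,h,w,3)
        grid = grid + flow.permute(0, 2, 3, 4, 1)
        warped = F.grid_sample(moving, grid, align_corners=True)
        return warped, flow

# Training is unsupervised: minimize the dissimilarity between the warped
# moving scan and the fixed scan (the paper adds a smoothness term on flow).
model = SimpleVoxelMorph()
moving = torch.rand(1, 1, 32, 32, 32)  # toy 32^3 volumes; real scans hold
fixed = torch.rand(1, 1, 32, 32, 32)   # millions of voxels
warped, flow = model(moving, fixed)
loss = F.mse_loss(warped, fixed)
```

Once trained, registering a new pair of scans is just one forward pass of the network, which is why it runs in seconds rather than hours.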
The team anticipates that a particularly productive area for future research will be applying the algorithm in near real time during surgery. For example, when performing brain tumor resections, scans are often taken before and after surgery to ensure that all of the tumor has been removed. But because registering and comparing those scans currently takes two or more hours, removing any portion of the tumor that was missed might require additional surgery. If the comparison could instead be done in seconds, as the VoxelMorph approach makes potentially possible, any parts of the tumor that were initially missed could be removed during the same operation.
This approach is not restricted to brain scans. Research is now being done on extending it to other applications, including imaging of the lungs.
Improving ICU Patient Care
The data physicians must contend with when analyzing patient records is often voluminous, poorly organized, and inconsistently documented. That, of course, can have a decidedly negative effect on the standard of patient care. A research team at MIT CSAIL is exploring ways of using deep learning to make individuals’ medical records more useful to medical personnel, especially in high-stress environments such as the ICU.
The approach they developed, called “ICU Intervene,” uses deep learning to analyze patient information (physician notes, vital signs, lab results, and patient demographic data) along with past ICU case information in order to suggest appropriate treatments in real time.
In their paper entitled “Clinical Intervention Prediction and Understanding using Deep Networks,” the team details how they used long short-term memory networks (LSTMs) and convolutional neural networks (CNNs) to predict in real time which of five medical interventions would be appropriate under the circumstances. The system, which reanalyzes the clinical data every hour, can even make predictions into the future; for example, it might indicate that a patient will need a ventilator in six hours rather than in 30 minutes or an hour.
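As a rough sketch of what such a model might look like, the PyTorch snippet below runs an LSTM over hourly feature vectors and emits a score for five interventions at every hour. The feature count, hidden size, and simple five-way output are illustrative assumptions, not the paper’s exact configuration (the actual system, for instance, predicts the onset and weaning of each intervention).

```python
import torch
import torch.nn as nn

class InterventionPredictor(nn.Module):
    """Sketch: an LSTM over hourly patient features that scores five
    possible interventions at every time step. All sizes are
    illustrative, not those used in the paper."""
    def __init__(self, n_features=40, hidden=128, n_interventions=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_interventions)

    def forward(self, x):
        # x: (batch, hours, n_features) -- one feature vector per hour,
        # built from vitals, lab results, demographics, and note features.
        out, _ = self.lstm(x)
        return self.head(out)  # (batch, hours, n_interventions) logits

model = InterventionPredictor()
history = torch.rand(8, 24, 40)        # 8 patients, 24 hours of features
logits = model(history)
probs = torch.softmax(logits, dim=-1)  # per-hour intervention scores
```

Because the LSTM produces an output at every time step, the same forward pass yields both a recommendation for the current hour and predictions for hours ahead when the input window is shifted forward.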
A major goal of the research is not just to be able to accurately suggest the most appropriate treatments at a particular time, but also to reveal to physicians the reasoning behind the system’s recommendations.
The researchers report that their models significantly outperform baselines in predicting the start and end of needed medical interventions.
Future research will aim at providing more individualized care recommendations, and offering more in-depth explanations of the reasoning behind the system’s decisions.
Protein Crystallization for Drug Development
Understanding the structure of a particular protein is often a critical step in developing new drugs. X-ray crystallography, the technique used to examine molecular structure, can’t measure the diffraction from a single protein molecule, which is far too weak. But if a protein crystal can be formed, containing perhaps billions of identical molecules arranged in a regular lattice, the combined diffraction is strong enough for X-ray crystallography to reveal the molecular structure of the sample.
But crystallizing proteins is extremely difficult. The traditional process amounts to little more than painstaking trial and error with hundreds of different liquid solutions. As Patrick Charbonneau, Professor of Chemistry and Physics at Duke University, puts it, “What allows an object like a protein to self-assemble into something like a crystal is a bit like magic.”
Looking for a way to speed up the crystal formation process, Dr. Charbonneau decided to examine the possibility that recent advances in deep learning might provide a solution. He helped assemble a team of researchers from both academia and industry, including Vincent Vanhoucke of Google Brain, to see if they could develop an AI model to recognize hidden patterns in crystallization images that indicate a particular protein solution is capable of crystallizing.
The team collected almost half a million images from protein crystallization experiments, along with human evaluations of the samples, into a database called MARCO (Machine Recognition of Crystallization Outcomes). These images served as the training set for a deep convolutional neural network (CNN).
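A minimal version of such a classifier might look like the sketch below, which fine-tunes an off-the-shelf CNN to sort crystallization images into four broad outcome categories such as crystals, precipitate, clear, and other. The ResNet-18 backbone, input sizes, and training loop here are assumptions made for illustration; the team’s published model is a different, deeper CNN.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained CNN and replace its final layer
# with a 4-way head for the broad crystallization outcome categories
# (crystals, precipitate, clear, other).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)

loss_fn = nn.CrossEntropyLoss()
images = torch.rand(16, 3, 224, 224)  # a batch of crystallization photos
labels = torch.randint(0, 4, (16,))   # human-assigned outcome labels
loss = loss_fn(model(images), labels)
loss.backward()
```

Trained on hundreds of thousands of labeled examples like these, a classifier of this kind can score new experiments automatically instead of relying on a human to inspect every droplet.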
According to the team’s research report, entitled “Classification of Crystallization Outcomes Using Deep Convolutional Neural Networks,” the effort was a resounding success. The algorithm achieved an accuracy level of 94.7 percent, which, as the report notes, “is even above what was once thought possible for human categorization.”