A Healthy Future for Machine Learning
Artificial intelligence and machine learning can advance healthcare data analytics for identifying and treating illness. The compute capability required will rely heavily on processor performance.
There are more demands on healthcare services than ever before, and as our ability to identify and treat illness grows, these demands will continue to increase. The healthcare industry faces the same conundrum as other areas of business and industry—how to do more in less time. For medical cases, there is also the element of increased accuracy, to identify and even anticipate events that can affect the patient.
Intel® CEO Brian Krzanich believes that Artificial Intelligence (AI) will be “transformative.” He announced at this month’s WSJ D.Live technology conference that Intel will soon ship the industry’s first silicon neural network processor, the Intel® Nervana™ Neural Network Processor (NNP). Its architecture is purpose-built for deep learning, with no standard cache hierarchy and with on-chip memory managed directly by software, to deliver the large amounts of compute that deep learning demands and to accelerate training time.
Deep learning is a branch of machine learning in which algorithms analyze data to find models that predict outcomes and that improve as more data becomes available. Unlike traditional machine learning, which typically relies on hand-crafted features, deep learning can discover the relevant features on its own, without manual engineering.
The Nervana NNP’s on- and off-chip interconnects are designed for large, bi-directional data transfer. Another feature is the model parallelism where neural network parameters are distributed across multiple chips, allowing them to act as a single, virtual chip.
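The idea behind model parallelism can be sketched in a few lines: a layer’s parameters are split across devices, each device computes its share of the output, and the pieces are combined as if a single virtual chip had done the work. This is an illustrative NumPy sketch, not Nervana’s actual implementation.

```python
import numpy as np

# Illustrative sketch of model parallelism: a layer's weight matrix is
# split column-wise across two "chips", each computes its slice of the
# output, and the slices are combined as if one virtual chip did it.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))          # one input activation vector
w = rng.standard_normal((8, 6))          # full weight matrix

w_chip0, w_chip1 = np.hsplit(w, 2)       # distribute parameters across chips
y_chip0 = x @ w_chip0                    # each chip computes in parallel
y_chip1 = x @ w_chip1
y = np.hstack([y_chip0, y_chip1])        # combine: acts as a single layer

assert np.allclose(y, x @ w)             # same result as one big chip
```

The large, bi-directional interconnects matter precisely because the partial results and activations must move between chips at every layer.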
Intel’s numeric format, Flexpoint, increases parallel processing by allowing scalar computations to be implemented as fixed-point multiplications and additions. Using a shared exponent increases the dynamic range, says Intel, while decreasing the power consumed per computation.
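The shared-exponent idea can be sketched briefly: one exponent is chosen for a whole tensor, so each element is stored as a small integer and arithmetic becomes fixed-point. This is a hedged illustration of the general block floating-point concept, not Intel’s actual Flexpoint encoding.

```python
import numpy as np

# Sketch of a shared-exponent (block floating-point) format: the whole
# tensor shares one exponent, so each element is stored as an integer
# mantissa and multiplies can be done with integer hardware.
def encode_shared_exponent(x, mantissa_bits=16):
    # pick one exponent so the largest element fits the mantissa range
    exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))) - (mantissa_bits - 1)
    mant = np.round(x / 2.0 ** exp).astype(np.int32)
    return mant, exp

def decode_shared_exponent(mant, exp):
    return mant.astype(np.float64) * 2.0 ** exp

x = np.array([0.5, -1.25, 3.0, 0.1])
mant, exp = encode_shared_exponent(x)
x_hat = decode_shared_exponent(mant, exp)

# quantization error is bounded by the shared step size 2**exp
assert np.max(np.abs(x - x_hat)) < 2.0 ** exp
```

Because every element shares the exponent, only the integer mantissas need to flow through the multipliers, which is where the power savings come from.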
The processing of data at increased rates in healthcare means that images or conditions can be processed rapidly to identify patterns or conditions that may be a concern. The obvious use would be scanning data in real time for earlier diagnosis and greater accuracy, advancing research in cancer, Parkinson’s disease, dementia, and other brain disorders.
Intel plans multiple generations of the Nervana NNP and has declared that it is on track to exceed its goal of achieving 100 times greater AI performance by 2020.
A Graphics Boost
At the Graphic Processing Unit (GPU) Technology Conference (GTC) in Munich this month, NVIDIA’s founder and CEO, Jensen Huang, described AI as software that writes itself and which learns from the digital experience.
In his keynote, he referenced this year’s Nobel Prize for Chemistry, awarded to Jacques Dubochet, Joachim Frank, and Richard Henderson, for their work on cryogenic electron microscopy (Cryo-EM), which captures large amounts of high-resolution images to reconstruct 3D macromolecular structures. Freezing the molecules mid-movement and displaying them at atomic resolution allows researchers to view biological processes that have not been visible before. This insight allows scientists to explore the architecture of proteins that cause antibiotic resistance and has been used to produce a 3D structure of an enzyme linked to Alzheimer’s, for example.
The international team of chemists used the open-source, GPU-accelerated software program RELION (REgularized LIkelihood OptimizatioN) to accelerate the processing and reconstruction of the 3D images.
NVIDIA’s V100 data center GPU, for example, coupled with the company’s TensorRT 3 AI inference software, can process 5,700 images per second instead of the 140 images per second achieved with a Central Processing Unit (CPU). Other benefits, Huang continued, are low latency and reductions in power consumption and size. One TensorRT GPU can replace 100 servers, he told the audience.
Augmentation Not Replacement
The healthcare analytics market could be worth as much as $24.55 billion by 2021, according to Research and Markets’ Healthcare report, partly driven by AI, which processes electronic health records data for insights into treatment and care.
Healthcare providers are expected to make significant investments in real-time, big data analytics applications, to harness not just electronic health records but also, using machine learning, diagnose quickly and accurately, anticipate prescription needs, and identify conditions and changes in conditions.
At GTC Europe, Matt Hawkins, Enterprise Data Architect at Kinetica, explained how the company can run ten machines, each powered by a single GPU, to process x-ray images stored in a database.
The company’s GPU architecture typically results in hardware costs that are less than 10 percent of other in-memory databases. Additionally, the database can be scaled out across multiple nodes as datasets grow.
Because a GPU performs parallel processing, it is faster than a CPU, which processes data largely sequentially; this reduces processing bottlenecks and the reliance on indexes. The company uses NVIDIA GPUs, pointing out that some have more than 4,000 cores, compared with 16 or 32 cores in a typical CPU-based device. There are also efficiency gains, says Hawkins, with as little as one-tenth the hardware and one-twentieth the power required compared with other in-memory analytics solutions.
GPUs tackle one of the main challenges of deep learning frameworks: the need for vast amounts of parallel processing to train the models. They can efficiently execute programmer-coded operations and handle the parallel training of deep neural networks for the AI database.
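Why training maps so well onto parallel hardware can be seen in a toy example: one training step over a whole batch reduces to large matrix multiplications, and every element of those products can be computed independently. This NumPy sketch of gradient-descent training stands in for what a GPU would parallelize.

```python
import numpy as np

# A toy training loop for a single linear layer: forward and backward
# passes over the whole batch are just matrix multiplies, which is why
# GPUs with thousands of cores accelerate training so effectively.
rng = np.random.default_rng(1)
batch, d_in, d_out = 64, 128, 10
x = rng.standard_normal((batch, d_in))   # input batch
y = rng.standard_normal((batch, d_out))  # targets
w = np.zeros((d_in, d_out))              # parameters to learn

lr = 0.01
for _ in range(100):
    pred = x @ w                         # forward: one big matrix multiply
    grad = x.T @ (pred - y) / batch      # backward: another matrix multiply
    w -= lr * grad                       # parameter update

loss = np.mean((x @ w - y) ** 2)         # falls as the model trains
```

On a GPU, each of those matrix products is split across thousands of cores; on a CPU with a few dozen cores, the same work serializes and becomes the bottleneck.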
Another challenge is that, unlike traditional machine learning, AI models retrain on new data, which must be ingested in real time. Kinetica has a user-defined functions framework that can run custom code directly on the data within the database. This code can use the GPU’s parallel computing performance to compute across several machines. These user-defined functions can range from simple regressions to complex deep neural networks.
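The shape of such a user-defined function can be sketched generically: the function receives table columns as arrays and computes its result next to the data, rather than exporting the data to an external tool. The API below is invented for illustration and is not Kinetica’s actual UDF interface.

```python
import numpy as np

# Hypothetical illustration of a user-defined function running "inside"
# a database: it receives columns as arrays and fits a simple regression
# in place. The function and table names here are invented examples.
def udf_linear_fit(columns):
    x, y = columns["dose"], columns["response"]
    slope, intercept = np.polyfit(x, y, 1)   # simple in-database regression
    return {"slope": slope, "intercept": intercept}

table = {"dose": np.array([1.0, 2.0, 3.0, 4.0]),
         "response": np.array([2.1, 3.9, 6.2, 7.8])}
result = udf_linear_fit(table)
```

In a real deployment, the same pattern scales up: the framework hands each machine its shard of the columns, and the GPU parallelizes the arithmetic within each shard.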
Because there is less data to index, data is available immediately after it is written and does not have to be rewritten before being returned to queries. In tests, the company returned results for advanced analytical queries on billions of rows of data in less than one second.
In its latest release, an OpenGL Framework architecture for the rendering engine has accelerated the rendering of polygon and line geometries in particular by up to a factor of 100, reports the company.
Another company using the power of GPUs for processing images is London-based medical imaging company Kheiron. It uses NVIDIA GPU-driven machine learning in detection software for radiology. The software is designed to augment the work of radiologists in mammography reading and reporting.
The software uses pre-processing, machine learning, deep learning, and Convolutional Neural Networks (CNNs), a class of neural networks particularly suited to analyzing images, and highlights areas of interest for a radiologist to examine in more detail.
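The mechanism at the heart of a CNN can be shown in miniature: a small filter slides over an image and responds most strongly where the pattern it encodes appears, yielding a map of candidate areas of interest. This is a deliberately tiny sketch of the general convolution operation, not Kheiron’s software.

```python
import numpy as np

# Minimal sketch of the convolution at the heart of a CNN: a small
# filter slides over the image, and its response map peaks where the
# pattern of interest appears.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[2:5, 2:5] = 1.0                 # a bright 3x3 blob, e.g. a lesion
kernel = np.ones((3, 3)) / 9.0        # averaging filter as a toy detector

heatmap = conv2d(image, kernel)
peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
# the response peaks where the filter fully overlaps the blob
```

A real CNN stacks many learned filters across many layers, but the principle is the same: high responses flag regions worth a radiologist’s closer look.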
Co-founder and CEO, Tobias Rijken, stresses that this next-generation detection software is designed to supplement, not replace, radiologists. “Already, two radiologists will view images,” he says. “This scanning software means fewer false positives, fewer missed cancers, and reliable early cancer detection, with improved outcomes.”
The software is due to undergo regulatory approvals at the end of this year for deployment in 2018. It will mean, says Rijken, improved productivity for hospitals, using affordable software and unbiased quality control in a time when the level of screening is increasing while the number of radiologists shrinks. Around 15 percent of radiologists in the UK will reach retirement age in the next five years at a time when 15 percent of radiologist posts in the UK remain unfilled.
Deepu Talla, Vice President and General Manager, Tegra, at NVIDIA, echoes the sentiment of augmentation, not replacement. “AI is not a new market, but a new tool that can be used in every industry,” he says. He likens it to electricity, used in all markets to bring new functionality. There will be some replacement and displacement, he concedes, but applications will augment people rather than replace them. Robotics can provide help for the elderly, and AI will mean that the interaction is positive and helpful. He points to a future when so-called cobots will figure out the emotions of a person and respond accordingly.
In the nearer term, he points out that laser eye surgery is already conducted by robots and foresees robots assisting—not replacing—surgeons performing repeated, precision incisions.
Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.