Driving AI at the Edge of Computing
From AI at the edge of IoT to the phenomenal impact of the Intel® Neural Compute Stick, a chat with Intel’s Steen Graham on the democratization of AI at the edge.
The Internet of Things (IoT) is progressing by harnessing the power of Artificial Intelligence (AI) at the edge. Intel® Corporation has a significant stake in pioneering the evolution of IoT and AI into a tremendous benefit to humankind. Edge computing provides a lower-cost, comprehensive solution by rejecting irrelevant data (or noise). It also pre-processes and compresses essential data so that increasingly detailed and accurate data can improve AI, feeding new insights back into the system and improving interactions in the real world.
What is Edge Computing? Why Incorporate AI?
Edge computing hardware is physically located near the source of the data (shown on the left side of the wireless transmission diagram in Figure 1). Edge computing is engineered to cull all but the most relevant data and compress it down to only what needs to be analyzed and stored (using powerful servers from the cloud service provider). Since the amount of data coming in from the IoT side can be huge – e.g., petabytes flowing in hourly or daily – AI can deftly sift the data before transmission. AI at the edge effectively pre-processes data before transporting it over a network to the cloud, avoiding extra charges for additional bandwidth and equipment. Although IoT platforms can be connected to the cloud via a fiber optic backbone, increasingly we see data transmitted wirelessly (as in Figure 1).
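The filter-then-compress step described above can be sketched in a few lines of Python. This is an illustrative toy, not Intel code: the "normal" band thresholds and the comma-separated payload format are assumptions made for the example.

```python
import zlib

# Illustrative sketch of edge pre-processing: drop readings that fall
# inside a "normal" band (treated as noise), then compress what remains
# before it crosses the network to the cloud. The thresholds and the
# text payload format are assumptions for this example.

def preprocess(readings, low=10.0, high=90.0):
    """Keep only out-of-band readings, then compress for transmission."""
    relevant = [r for r in readings if r < low or r > high]
    payload = ",".join(f"{r:.2f}" for r in relevant).encode("utf-8")
    return zlib.compress(payload)

raw = [42.0, 3.5, 55.1, 97.2, 50.0, 8.8]
packet = preprocess(raw)
# Only the anomalous readings (3.5, 97.2, 8.8) survive the filter;
# the compressed packet is all that is transmitted upstream.
```

A real edge node would of course use domain-specific filtering (or an AI model scoring each reading), but the shape of the pipeline – reject, reduce, transmit – is the same.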
In a recent interview with Steen Graham, Intel’s General Manager of IoT Ecosystems and Channels for IoTG, Embedded Systems Engineering discussed some of the initiatives, use cases, and tools that Intel is proffering for a continually evolving IoT landscape, including Artificial Intelligence (AI). Part of the effort is visible in the “Intel® AI: In Production” program, which makes it easier for developers to bring their AI prototypes to market.[i]
Lynnette Reese (LR): IoT is still enabling transformation, and probably will be for some time to come. However, in the last five years, we’ve seen additional improvements in IoT technology, including AI. Can you expand on how AI has become an accelerant for IoT deployments, and which industries are driving that transformation?
Steen Graham (SG): Artificial intelligence is a tremendous accelerant to edge computing and the Internet of Things. IDC anticipates that this year 45% of data will be stored, analyzed, and acted upon at the edge. AI enables us to gain insights from this data and act on it to enable new services. ABI Research anticipates that “AI inference that take[s] place on edge devices instead of in the cloud will grow more than sevenfold, from 6 percent in 2017 to 43 percent in 2023.”[ii] We see a number of industries innovating with AI at the edge:
- In healthcare, the combination of AI and IoT is streamlining drug discovery, speeding up genomics processing, and making medical imaging analysis faster and more accurate for personalized treatment. Utilizing the OpenVINO™ toolkit, Philips was able to dramatically accelerate image analysis on patient X-rays for bone-age-prediction modeling and on CT scans for lung segmentation.[iii] This resulted in a 188x improvement in bone-age prediction and 38x in lung segmentation.
- In industrial and manufacturing settings, IoT and AI are enabling smart manufacturing through machine vision, connected devices, and real-time insights. Working with Alibaba, an aluminum die-casting plant in Chongqing, China was able to increase its defect detection speed fivefold.[iv]
- In retail, IoT and AI are solving critical problems like inventory distortion and enabling new experiences such as cashier-less checkout. Intel teamed with Pensa to enable AI-equipped drones to autonomously scan shelves and alert retailers to inventory levels.[v]
Additionally, we expect significant innovation in the near future in transportation and smart cities. To learn more, check out the Intel® AI: In Production ecosystem.
We are building a community of developers by providing open scalable tools and offerings to expedite their path from prototype to production.
LR: Why is edge computing a significant transition?
SG: The need for edge computing is driven by computing becoming woven into the operational fabric of how we live and work. This drives computing needs for low latency, persistent connectivity, data filtering at the edge, security, and compliance with the regulatory environment. Providing these capabilities at scale across the heterogeneous edge is a significant transition for the industry.
LR: How is Intel committed to nurturing the marriage of IoT and AI?
SG: Intel is nurturing the marriage of IoT and AI through developer tools such as the Intel® Media SDK and the OpenVINO™ toolkit. For example, the OpenVINO™ toolkit enables a developer to deploy deep learning at the edge by writing once in C++ or Python and deploying across CPUs, integrated graphics, FPGAs, and VPUs. It supports major frameworks such as TensorFlow®, Caffe, MXNet, ONNX, and more. Intel provides sample code and pre-trained models through our model zoo so developers can innovate quickly across a leading set of deep learning computer vision use cases. You will see further releases with more hardware, sample code, and framework support. It just keeps getting better.
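The “write once, deploy across devices” workflow Graham describes can be sketched with OpenVINO’s Python Runtime API, where only a device string changes per target. The helper name and model path below are illustrative assumptions; `Core`, `read_model`, and `compile_model` are OpenVINO’s actual runtime interface.

```python
# Illustrative sketch, not Intel sample code. The helper name and model
# path are assumptions; Core/read_model/compile_model come from the
# OpenVINO Runtime API, and only the device string changes per target.

def compile_for_device(model_xml: str, device: str = "CPU"):
    """Compile an OpenVINO IR model for one target device: "CPU",
    "GPU" (integrated graphics), or "MYRIAD" (Neural Compute Stick 2)."""
    from openvino.runtime import Core  # requires `pip install openvino`
    core = Core()
    model = core.read_model(model=model_xml)
    return core.compile_model(model, device_name=device)

# The same call retargets the same network to different silicon:
#   cpu_net = compile_for_device("face-detection.xml", "CPU")
#   ncs_net = compile_for_device("face-detection.xml", "MYRIAD")
```

The point of the pattern is that the application code around the compiled model never changes; moving from a laptop CPU prototype to a Neural Compute Stick 2 deployment is a one-string edit.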
LR: What else has Intel done to assist developers in the learning curve associated with AI on the edge?
SG: To provide developers with an entry point into AI at the edge, we launched the second version of the Intel® Neural Compute Stick. This gives developers an affordable development platform for prototyping AI at the edge. With the OpenVINO™ toolkit, developers can prototype on a Neural Compute Stick 2 and transition to a production offering through our AI: In Production ecosystem. For integrators and end users looking to see the benefits of AI today, we also showcase Intel® IoT Market Ready Solutions and Intel® IoT RFP Ready Kits, which are enabled with Intel® Vision Products and ready to deploy across manufacturing, health, transportation, smart cities, and retail.
LR: According to an Intel® press release online, “The Joint Edge Computing Platform was recently deployed in Chongqing Refine-Yumei Die Casting Co., Ltd. (Yumei) factories and was able to increase defect detection speed five times from manual detection to automatic detection.”[vi] Can you expand on this example of how IoT Edge computing is changing industry on a global scale?
SG: A common product in aluminum casting is automotive parts, where quality assurance is critical. Traditionally, humans inspect for defects visually, and factory workers must wait for the castings to cool down before inspection, which is both dangerous and time-consuming. Utilizing robotics and computer-vision-based defect detection, Yumei’s detection speed increased fivefold, from manual to automatic detection. This compelling success story in the industrial sector reaffirms our commitment to enabling enterprises to achieve efficiency improvements, gain actionable insights, and capture market opportunities ahead of their competition with edge computing.
LR: Open source software started a boom in technology with Linux, primarily because it was royalty free and freely accessible. For hardware, low cost equates to a large part of equitable accessibility, where creativity can flourish in embedded hardware and software. Intel is uniquely harnessing human creativity via a low-cost platform with free tools, excellent documentation, training, tutorials, and forums, wouldn’t you agree?
SG: Indeed, the accessibility and affordability of these technologies are critical. At Intel, our goal is to help the developer community unlock its potential by democratizing access to AI, through offerings like the Intel® Neural Compute Stick (NCS). We have already seen innovation flourish from NCS usage in a wide range of applications.
LR: How has the Neural Compute Stick impacted the world?
SG: The Neural Compute Stick is an affordable entry point for AI prototyping that scales to a diverse population of developers. For example, Peter Ma, an independent Intel Software Innovator, developed many offerings based on our Neural Compute Stick and won a number of hackathons, earning hundreds of thousands of dollars. One example is skin cancer detection; another is clean water AI. With the sub-$100 Neural Compute Stick available from online retailers and the free OpenVINO™ toolkit, developers can prototype applications that detect skin cancer, assess honeybee populations, deliver clean water, and much more.
LR: Tell us more about Peter Ma’s skin cancer detection unit using the Neural Compute Stick.
SG: One of the biggest hurdles for AI is having enough computing power at the edge, especially for developers of computer vision applications. Intel’s Neural Compute Stick 2 is designed to address this challenge. Peter Ma was able to utilize it to classify skin cancer types for detection in real time. When potential cancer is detected with a high confidence score, the user is advised to see a dermatologist. The developer community is, rightfully so, passionate about solving cancer detection challenges. This year at the Intel booth at Embedded World, we showcased a demo of acute myeloid leukemia detection prototyped on the Neural Compute Stick. It is just incredible what people can accomplish when provided the right technology for the job.
LR: How are developer tools evolving as technology has progressed all the way from developing on bare metal, to being insulated from complexity via operating systems, to containerization?
SG: These are exciting and interesting times. Deep learning requires our processors and accelerator technologies to be hyper-optimized for performance. Developers are becoming more and more abstracted from the semiconductors due to the trends toward virtual machines, containerization, and now Function-as-a-Service. Intel provides developer-friendly tools and a robust portfolio of hardware and software assets, easy-to-use integrated development environments, and reference designs to accelerate system and application development. This allows developers to be abstracted from unmanageable complexity, yet still get the benefits of performance and capabilities within the underlying semiconductor architecture. You will see the developer toolset change over time. What we call a “developer” today is going to look a lot different, just as today’s developer looks different from one a decade ago. A lot of the new tools provided for computer vision are much easier to use, much more plug-and-play, and scalable. Across industries and job functions within the next decade, you’ll see an explosion of people using developer tools in their daily work, and many of these tools will leverage deep learning technologies.
LR: As an executive at Intel, you see the evolving technology at a higher level than those working in the trenches, solving individual problems. What would you say we will be looking at in AI 5 to 10 years from now, regarding the current context of our discussion here?
SG: First, it’s critical to learn from the folks solving challenging problems, one line of code or installation at a time. Much of our strategy is shaped by learning how our vast partner ecosystem is deploying our technology. Over the next 5 to 10 years we’ll move toward a software-defined, more autonomous world, and we will have connected a significant part of the physical world to the internet. This will enable transformation and innovation across industries and enrich our lives. AI, IoT, and edge computing are at the center of this transformation. To paraphrase Andy Grove, you can be the subject of a strategic inflection point or the cause of one; companies that embrace this transformation will thrive and others will falter.