The Future of Machine Learning: Neuromorphic Processors

Neuromorphic engineering is gaining momentum as data- and power-hungry conventional machine learning architectures in the cloud give way to power- and data-efficient machine learning in mobile and edge devices.

What applications are enabled by low-power neuromorphic processors?

How do neuromorphic applications provide potential solutions to the challenges of machine learning in the cloud, including latency, security, power consumption, and reliance on “always on” hardware?

Machine learning has emerged as the dominant tool for implementing complex cognitive tasks, resulting in machines that have demonstrated, in some cases, super-human performance. However, these machines require training with large amounts of labeled data, and this energy-hungry training process is often prohibitive without costly supercomputers. The way animals and humans learn is far more efficient, driven by the evolution of a different kind of processor, the brain, which simultaneously optimizes the energy of computation and the efficiency of information processing[1]. The next generation of computers, called neuromorphic processors, will strive to strike this delicate balance between the efficiency of computation and the energy it requires.

The foundation for the design of neuromorphic processors is rooted in our understanding of how biological computation differs from today's digital computers (Figure 1). The brain is composed of noisy analog computing elements, including neurons and synapses. Neurons operate as relaxation oscillators. Synapses are implicated in memory formation in the brain and can resolve only three to four bits of information each[2]. The brain operates using a plethora of brain rhythms but without any global clock (i.e., it is clock free), and the dynamics of these elements are asynchronous[3,4].
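
To make the relaxation-oscillator picture concrete, here is a minimal sketch, in Python, of a leaky integrate-and-fire neuron, one common model of this behavior. All parameter values are illustrative assumptions, not taken from any Eta Compute design: the membrane potential integrates its input, emits a spike when it crosses a threshold, and relaxes back to a reset value, repeating without any global clock.

```python
# Minimal leaky integrate-and-fire neuron, one common relaxation-
# oscillator model of biological spiking. All constants are illustrative.

def lif_neuron(input_current, dt=1e-3, tau=20e-3,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a LIF neuron; return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # The membrane potential relaxes toward v_rest while the input drives it up.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_thresh:             # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset               # relax back and integrate again
    return spikes

# A constant supra-threshold drive makes the neuron oscillate steadily:
print(lif_neuron([1.5] * 200)[:5])    # first few spike times, in seconds
```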

Figure 1: Comparing computation in digital computers and the brain.

Since brain synapses are fully distributed and are implicated in memory, there is, in general, no single synapse or single neural firing activity that corresponds to an item or concept; the brain is symbol free[4]. The effective integration of heterogeneous and non-local sources of information for multiple goals is a hallmark of human-like cognition[5,6]; the brain thus operates in a grid-free fashion. Finally, neuronal interactions are scale free in the sense that they can span from a few neurons to the entire brain, depending on the context in which the brain operates[6].

At Eta Compute, we are focused on exploring the design and architecture of the next generation of machine learning algorithms and of hardware systems based on a neuromorphic processor. We have designed our first-generation Artificial Intelligence (AI) chip, the ECM3531[7-9], which combines our proprietary Delay Insensitive Asynchronous Logic technology (DIAL™), dynamic voltage and frequency scaling, and near-threshold voltage operation. The MCU uses an Arm Cortex-M3, operates from below 1 MHz to over 100 MHz, and consumes single-digit milliwatts of power. This award-winning technology[10] was demonstrated with the design and implementation of a complex image classification neural network on a dual-core Arm Cortex-M3 and DSP SoC operating at a 40 MHz clock frequency, a significant step toward intelligent and power-efficient edge nodes. A previous Arm publication[11] describes an implementation of the same CNN on a Cortex-M7 running at 216 MHz that consumes 13 mJ per image inference; our implementation consumes only 0.4 mJ per image, a roughly 30X reduction in energy consumption. This technology, with its custom kernel optimizations and its tightly integrated DSP and microcontroller architecture, will enable embedded machine intelligence for a wide range of applications.
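
The quoted figures make the reduction easy to verify; this short snippet, using only the numbers stated above, restates the arithmetic:

```python
# Energy-per-inference comparison using the figures quoted above.
cortex_m7_energy_mj = 13.0   # CMSIS-NN CNN on a Cortex-M7 at 216 MHz [11]
ecm3531_energy_mj = 0.4      # the same CNN on the ECM3531 at 40 MHz

reduction = cortex_m7_energy_mj / ecm3531_energy_mj
print(f"Energy reduction: {reduction:.1f}x")  # ~32.5x, i.e. roughly 30X
```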

At the same time, the technology also supports models built on spike-based computation, which enables efficient extraction and representation of useful information from high-dimensional data such as images (Figure 2). This efficient-extraction capability has been exploited to recognize items (images or audio) in an energy-efficient manner[7,12]; a sketch of one such sparse encoding appears after Figure 2 below. This solution can support a wide range of applications in audio, video, and signal processing where power is a severe constraint, as it is in mobile devices, Unmanned Aerial Vehicles (UAVs), the Internet of Things (IoT), and wearable markets. Furthermore, for real-world scenarios in which readily labeled data is scarce or unavailable, Eta Compute's spike-based autonomous learning algorithms can extract actionable intelligence despite this limitation.

Figure 2: Spike-based representation of information is efficient. (a) An example cheetah image, where fewer than 1,000 spikes suffice to learn that the data is an image of a cheetah, as opposed to the 100,000 pixels in the original image. (b) An example of audio processing, where the network learns to produce a single spike at the beginning of the word “Smart,” itself expressed as input spikes. This parsimonious representation of audio samples is another hallmark of spike-based neuromorphic processors.
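
The article does not specify the encoding behind Figure 2, so as one hypothetical illustration of why spike codes are parsimonious, the sketch below emits a spike only where local image contrast exceeds a threshold; a synthetic image of roughly 100,000 pixels then reduces to a few hundred spike events.

```python
import numpy as np

# Hypothetical sparse spike encoding (not Eta Compute's algorithm):
# emit a spike only where local horizontal contrast exceeds a threshold.

def encode_spikes(image, threshold=0.2):
    """Return (row, col) spike addresses for high-contrast pixels."""
    contrast = np.abs(np.diff(image.astype(float), axis=1))
    rows, cols = np.nonzero(contrast > threshold)
    return list(zip(rows, cols))

# A synthetic image of ~100,000 pixels containing one bright square:
img = np.zeros((316, 316))
img[100:200, 100:200] = 1.0
spikes = encode_spikes(img)
print(f"{img.size} pixels -> {len(spikes)} spikes")  # 99856 pixels -> 200 spikes
```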

Recent advances in event-based sensor technology[13,14] mean that temporally precise spikes can now readily be acquired from visual and audio sensors. These sensors were inspired by how the retina and cochlea process signals from the environment, and they outperform conventional sensors in many respects. For example, they can provide up to seventy percent more information than conventional images from a frame-based sensor used in standard artificial vision. Furthermore, their response times are much faster, enabling the detection of features that cannot even be sensed with regular cameras. We believe that combining precisely timed spike events from an event sensor with the efficiency of our processor in computing and learning will result in dramatically lower energy and higher throughput in next-generation vision- and audio-based neuromorphic systems, especially for edge devices that are limited in both memory and power.
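
The vision sensor cited above[13] illustrates the temporal-contrast principle: each pixel independently emits an ON or OFF event when its log intensity changes by more than a fixed threshold. The following frame-based simulation is a simplified sketch of that principle; real sensors operate asynchronously per pixel, and the threshold here is an assumption.

```python
import numpy as np

# Simplified, frame-based simulation of a temporal-contrast event sensor:
# a pixel emits an ON/OFF event when its log intensity changes by more
# than a contrast threshold since that pixel's last event.

def events_between_frames(ref_log, frame, threshold=0.15):
    log_i = np.log(frame.astype(float) + 1e-6)
    delta = log_i - ref_log
    on = np.argwhere(delta > threshold)     # pixels that got brighter
    off = np.argwhere(delta < -threshold)   # pixels that got darker
    fired = np.abs(delta) > threshold
    ref_log[fired] = log_i[fired]           # update only pixels that fired
    return on, off, ref_log

# A static background produces no events; only the changing column does.
f0 = np.ones((8, 8))
f1 = f0.copy()
f1[:, 4] = 2.0                              # one column doubles in brightness
ref = np.log(f0 + 1e-6)
on, off, ref = events_between_frames(ref, f1)
print(f"{len(on)} ON events, {len(off)} OFF events")  # 8 ON, 0 OFF
```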

Sources predict that there will be over 25 billion edge devices in use by 2020[15]. Eta Compute believes that neuromorphic technology will play a key role in enabling intelligent edge devices. By learning from and processing sensory data directly at the edge in a power-efficient manner, we will relieve the bandwidth otherwise needed to send raw data to a cloud-based learning service. At the same time, obviating the need to transmit data to and from the cloud reduces the data's exposure, providing a natural path to keeping the process private and secure.
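
The scale of the bandwidth relief is easy to estimate. In the back-of-the-envelope sketch below, every figure (sample rate, event size, event frequency) is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope bandwidth comparison; every figure is an assumption.
raw_audio_bps = 16_000 * 16            # 16 kHz, 16-bit mono microphone
detections_per_hour = 10               # keyword events sent upstream
bytes_per_detection = 32               # label + timestamp + confidence

raw_bytes_per_hour = raw_audio_bps * 3600 / 8    # streaming raw audio
edge_bytes_per_hour = detections_per_hour * bytes_per_detection

print(f"raw: {raw_bytes_per_hour / 1e6:.0f} MB/h, "
      f"edge: {edge_bytes_per_hour} B/h, "
      f"ratio: {raw_bytes_per_hour / edge_bytes_per_hour:,.0f}x")
```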

The power-efficient nature of neuromorphic processors will also enable “always on” solutions for edge devices, making their decision making more agile without being handicapped by power constraints. We believe that the advent of these new generations of processors will lead to a shift from data- and power-hungry conventional machine learning in the cloud to power- and data-efficient machine learning at the edge.

Resources

  1. N. Srinivasa, On the Link between Energy and Information for the Design of Neuromorphic Systems, Sensors Magazine, https://www.sensorsmag.com/embedded/link-between-energy-and-information-for-design-neuromorphic-systems
  2. A. B. Barrett, M. C. W. van Rossum, “Optimal Learning Rules for Discrete Synapses,” PLoS Computational Biology 4(11): e1000230. doi:10.1371/journal.pcbi.1000230, 2008.
  3. A. Renart, J. De la Rocha, P. Bartho, L. Hollender, N. Parga, A. Reyes, K. D. Harris, “The asynchronous state in cortical circuits,” Science, 327, pp. 587–590, 2010.
  4. G. Buzsaki, Rhythms of the Brain, Oxford University Press, USA, 2009.
  5. G. M. Edelman, The Remembered Present: A Biological Theory of Consciousness, Basic Books, NY, 1989.
  6. W. J. Freeman, “A field-theoretic approach to understanding scale-free neocortical dynamics,” Biological Cybernetics, 92(6):350-359, 2005.
  7. N. Srinivasa and G. Raghavan, “Micropower Intelligence for Edge Devices,” Embedded Computing Design, October 2018. http://www.embedded-computing.com/guest-blogs/micropower-intelligence-for-edge-devices
  8. T. R. Halfhill, “Clockless Cortex-M3 Cuts Power,” Linley Report, April 2018.
  9. B. Wheeler, “Eta Compute MCU puts AI in IoT,” Linley Report, October 2018.
  10. https://globenewswire.com/news-release/2018/10/29/1638338/0/en/Eta-Compute-s-Machine-Learning-Platform-Comes-Home-a-Double-Award-Winner-from-Arm-TechCon-2018.html
  11. L. Lai, N. Suda and V. Chandra, “CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs,” Jan 2018. arXiv:1801.06601v1.
  12. S. K. Moore, “Eta Compute Debuts Spiking Neural Network Chip for Edge AI”, IEEE Spectrum, October 2018. https://spectrum.ieee.org/tech-talk/semiconductors/processors/eta-compute-debuts-spiking-neural-network-chip-for-edge-ai
  13. P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor,” IEEE J. Solid-State Circ. 43, 566–576, 2008.
  14. S.-C. Liu, A. van Schaik, B. Minch, and T. Delbruck, “Event-based 64-channel binaural silicon cochlea with Q enhancement mechanisms,” in Proceedings of IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2027–2030, 2010.
  15. Gartner Says 4.9 Billion Connected “Things” Will Be in Use in 2015, https://www.gartner.com/newsroom/id/2905717.

Narayan Srinivasa, Ph.D., CTO of Eta Compute, is an expert in machine learning and neuromorphic computing and its applications to solve real-world problems. Prior to Eta Compute, Dr. Srinivasa worked at Intel Labs as Chief Scientist and Senior Principal Engineer, leading the work on neuromorphic computing. Before Intel Corporation, he was the director of the Center for Neural and Emergent Systems at Hughes Research Laboratories. Over an 18-year period, he served in various capacities, including as the principal investigator for the DARPA programs SyNAPSE, Physical Intelligence, UPSIDE, and others. Dr. Srinivasa has 56 issued patents and has published more than 94 articles in peer-reviewed journals and conferences. Reports about his work have appeared in The Economist, MIT Technology Review, Wired, and Forbes. Dr. Srinivasa has a Ph.D. from the University of Florida and was a Beckman Post-Doctoral Fellow at the University of Illinois at Urbana-Champaign.
