
AI needs memory

Artificial intelligence, which excels at analyzing large amounts of data (think image processing and natural language recognition), is already impacting every aspect of our lives. Products made today are being redesigned to accommodate some form of intelligence so that they can adapt to the preferences of the user. Smart speakers integrating Alexa or Siri are perhaps the best examples in the home and office, but there’s huge value in AI for businesses. “AI is so fundamental to improving what we expect of devices and their ability to interpret our needs and even predict our needs, that’s something that we’re going to see more and more of in the consumer space. And then of course in the industrial environments as well,” notes Colm Lysaght, vice president of corporate strategy at Micron Technology. “Many different industries are working and using machines and algorithms to learn and adapt and do things that were not possible before.”

There are various ways to crunch this data. CPUs handle structured floating-point data well, while GPUs excel at the highly parallel arithmetic behind many AI workloads – but that doesn’t mean people aren’t using traditional CPUs for AI. In fact, AI is being implemented today with a mix of CPUs, GPUs, ASICs and FPGAs. Whatever the compute, data crunching also needs a lot of memory and storage.
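To make that memory demand concrete, here is a rough back-of-the-envelope sketch of what training alone consumes. The model size, the 4-byte precision, and the assumption of an Adam-style optimizer are illustrative choices, not figures from the report:

```python
# Rough estimate of memory needed just to hold training state for a
# neural network, assuming an Adam-style optimizer (hypothetical setup).
def training_memory_gb(num_params, bytes_per_param=4):
    weights = num_params * bytes_per_param        # the model weights themselves
    gradients = num_params * bytes_per_param      # one gradient per weight
    optimizer = num_params * bytes_per_param * 2  # Adam keeps two moments per weight
    return (weights + gradients + optimizer) / 1e9

# A hypothetical 1-billion-parameter model in 32-bit floats:
print(f"{training_memory_gb(1_000_000_000):.0f} GB")  # prints "16 GB" of state
```

That 16 GB excludes activations and input batches, which can dominate in practice – which is why memory and storage, not just compute, become the bottleneck the report describes.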

A new report by Forrester Consulting, commissioned by Micron, takes a look at how companies are implementing AI and the hardware they are using, with a special focus on memory and storage.

To explore this topic, Forrester conducted an online survey of 200 IT and business professionals who manage architecture, systems, or strategy for complex data at large enterprises in the US and China, plus three additional interviews. Here are their key findings:

  • AI/ML will continue to exist in public and private clouds. Early modeling and training on public data is occurring in public clouds, while production at scale and/or on proprietary data will often be in a private cloud or hybrid cloud to control security and costs.
  • Memory and storage are the most common challenge in building AI/ML training hardware. While the CPU/GPU/custom compute discussion received great attention, memory and storage are turning out to be the most common challenge in real world deployments and will be the next frontier in AI/ML hardware and software innovation.
  • Memory and storage are critical to AI development. Whether focusing on GPU or CPU, storage and memory are critical in today’s training environments and tomorrow’s inference.

“AI is having a very large impact on society and it is fundamentally rooted in our technology. Many different applications, all of which are interpreting data in real time, need fast storage and they need memory,” Lysaght said. “At Micron, we’re transforming the way the world uses information to enrich our lives.”

To get to the next level in performance/Watt, innovations being researched at the AI chip level include low-precision computing, analog computing and resistive computing. This will require some new innovation in design, manufacturing and test. That’s the focus of The ConFab, to be held May 14-17 at The Cosmopolitan of Las Vegas (see www.theconfab.com for more information).
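The appeal of low-precision computing is easy to see on paper: narrower number formats cut memory footprint and bandwidth in direct proportion. A minimal sketch, with a hypothetical tensor size chosen purely for illustration:

```python
# Memory footprint of the same tensor at different numeric precisions,
# showing why low-precision computing helps performance/Watt (illustrative).
BYTES = {"fp32": 4, "fp16": 2, "int8": 1}

def footprint_mb(num_values, precision):
    """Megabytes needed to store num_values at the given precision."""
    return num_values * BYTES[precision] / 1e6

n = 10_000_000  # hypothetical: ten million values, e.g. one layer's activations
for p in BYTES:
    print(f"{p}: {footprint_mb(n, p):.0f} MB")  # 40, 20, and 10 MB respectively
```

Halving the bytes per value halves not only storage but also the data that must move between memory and compute, which is often the dominant energy cost.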
