
Emerging Memory Types Headed for Volumes

After decades of R&D, two emerging memory types – the phase-change-based 3D XPoint, co-developed by Intel and Micron, and the embedded spin-transfer torque magnetic RAM (e-MRAM) offered by several foundries – are now coming to market. One point of interest is that neither relies on charge storage, unlike the SRAM and DRAM technologies that increasingly face difficult scaling challenges. Another is that both have inherent performance advantages that could extend their usefulness for decades to come.

3D XPoint is a storage-class memory (SCM) based on phase change that fits between fast DRAM and non-volatile NAND; it is currently shipping in SSDs and sampling in a DIMM form factor. David Kanter, an analyst at Real World Technologies (San Francisco), said the Optane SSDs are selling now but the DIMMs are shaping up to be “an early 2019 story” in terms of real adoption. “People are very excited about the DIMMs, including customers, software developers, the whole computer ecosystem. There is a lot of software development going on that is required to take advantage of it, and a lot of system companies are saying they can’t wait. They are telling Intel ‘give me the hardware.’”

“Intel is taking the long view” when it comes to 3D XPoint (the individual devices) and Optane (the SSDs and DIMMs), Kanter said. “This is a new technology and it is not a trivial thing to bring it to the market. It is a testament to Intel that they are taking their time to properly develop the ecosystem.”

However, Kanter said there is not enough public information about the 3D XPoint DIMMs, including performance, price, power consumption, and other metrics. Companies that sell enterprise database systems, such as IBM, Microsoft, Oracle, and SAP, are willing to pay high prices for a storage-class memory solution that will improve their performance. The Optane DIMMs, according to Intel, are well-suited to “large-capacity in-memory database solutions.”

According to the Intel Web site, Optane DC persistent memory “is sampling today and will ship for revenue to select customers later this year, with broad availability in 2019.” It can be placed on a DDR4 module alongside DRAM, and matched up with next-generation Xeon processors. Intel is offering developers remote access to systems equipped with Optane memory for software development and testing.

Optane DIMMs reach ‘broad availability’ in 2019

Speaking at the Symposium on VLSI Technology in Honolulu, Gary Tressler, a distinguished engineer at IBM Systems, said “the reliability of 3D NAND impacts the enterprise,” and predicted that the Optane storage class memory will serve to improve enterprise-class systems in terms of reliability and performance.

The DRAM scaling picture is not particularly bright. Tressler said “it could be four years before we go beyond the 16-gigabit size in terms of DRAM density.” DRAM companies are eking out scaling improvements in 1-nm increments, an indication of the physical limitations facing the established DRAM makers.

Al Fazio, a senior fellow at Intel who participated in the memory-related evening panel at the VLSI symposia, said that early adopters of the Optane technology have seen significant benefits: one IT manager told Fazio that by adding a layer of Optane SSD-based memory, he was able to rebuild a database in seconds versus 17 minutes previously. Fazio said he takes particular pride in the fact that, because of Optane, some doctors are now able to immediately read the results of magnetic resonance imaging (MRI) tests.

“An MRI now takes two minutes instead of 40 minutes to render,” Fazio said, adding that a second generation of 3D XPoint is being developed, which he said draws upon “materials improvements” to enhance performance.

Chris Petti, a senior director of advanced technology at Western Digital, said DRAM pricing has been “flat for the last five to seven years,” which makes simply adding more DRAM an expensive way to bridge the latency gap between DRAM and flash. “DRAM is not scaling so there are a lot of opportunities for a new technology” such as Optane or the fast NAND technologies, he said. Samsung is working on a single-bit-per-cell form of fast NAND.

In a Monday short course on emerging memory technologies at the Symposium on VLSI Circuits, Petti said the drawback to phase-change memories (PCMs), such as 3D XPoint, is the relatively high write energy per bit, which he estimated at 460 pJ/bit, compared with 250 pJ/bit for standard NAND (based on product spec sheets). In terms of cost, latency, and endurance, Petti judged the PCM memories to be in the “acceptable” range. While the price is five to six times the price-per-bit of standard NAND, Petti noted that the speed improves “because PCM is inherently faster than charge storage.”
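To put those per-bit figures in perspective, the short Python sketch below converts them into joules for a bulk write. The 1-TB workload size is an arbitrary illustration rather than anything from Petti’s talk, and the arithmetic ignores controller and interface overhead.

# Back-of-the-envelope comparison of write energy, using the per-bit
# figures Petti cited: 460 pJ/bit for PCM, 250 pJ/bit for standard NAND.
# The 1-TB workload is an illustrative assumption, not from the talk.

PCM_PJ_PER_BIT = 460
NAND_PJ_PER_BIT = 250

def write_energy_joules(bytes_written: float, pj_per_bit: float) -> float:
    """Energy in joules to write the given number of bytes."""
    bits = bytes_written * 8
    return bits * pj_per_bit * 1e-12  # picojoules -> joules

terabyte = 1e12  # decimal terabyte
print(f"PCM : {write_energy_joules(terabyte, PCM_PJ_PER_BIT):.0f} J per TB written")   # ~3680 J
print(f"NAND: {write_energy_joules(terabyte, NAND_PJ_PER_BIT):.0f} J per TB written")  # ~2000 J

At these numbers, PCM pays roughly 1.8 times more energy per bit written than standard NAND, the trade-off Petti weighed against its latency advantage.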

Source: Chris Petti, Western Digital, short course presentation at 2018 Symposium on VLSI Circuits

Phase-change materials, such as Ge2Sb2Te5, change between two different atomic structures, each of which has a different electronic state. A crystalline structure allows electrons to flow while an amorphous structure blocks the flow. The two states are changed by heating the PCM bit electrically.
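That resistance contrast is what a read operation senses. The minimal Python sketch below shows the idea of recovering a bit by comparing a cell’s resistance against a threshold; the resistance values and threshold are illustrative assumptions, not figures from the article.

# Minimal model of reading a phase-change memory cell: the crystalline
# (SET) state conducts, the amorphous (RESET) state blocks current, so a
# bit is recovered by comparing the cell's resistance against a threshold.
# All numbers below are illustrative only.

CRYSTALLINE_OHMS = 1e4   # low-resistance, conductive state
AMORPHOUS_OHMS   = 1e6   # high-resistance, blocking state
READ_THRESHOLD   = 1e5   # sense-amp trip point between the two states

def read_bit(cell_resistance_ohms: float) -> int:
    """Return 1 for the conductive (crystalline) state, 0 for amorphous."""
    return 1 if cell_resistance_ohms < READ_THRESHOLD else 0

assert read_bit(CRYSTALLINE_OHMS) == 1
assert read_bit(AMORPHOUS_OHMS) == 0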

Philip Wong, a Stanford University professor, said the available literature on PCM materials shows that they can be extremely fast; the latencies seen at the SSD and DIMM levels are largely governed by “protocols.” In 2016, a team of Stanford researchers reported that, based on their fundamental properties, phase-change materials could be as much as a thousand times faster than DRAM.

In a keynote speech at the VLSI symposia, Scott DeBoer, executive vice president of technology development at Micron (Boise, Idaho), said “clearly the most successful of the emerging memories is 3D XPoint, where the technology performance has been proven and volume production is underway. 3D XPoint performance and density are midway between DRAM and NAND, which offers opportunities to greatly enhance system-level performance by augmenting existing memory technologies or even directly replacing them in some applications.”

Currently, the 3D XPoint products are made at a fab in Lehi, Utah. The initial technology stores 128Gb per die across two stacked memory layers. Future generations can either add more memory layers or use lithographic pitch scaling to increase die capacity, according to Micron.
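As a rough illustration of those two levers, adding layers grows capacity linearly, while a lithographic shrink grows it with the square of the pitch reduction. The sketch below starts from the announced 128-Gb, two-layer die; the shrink factor is a made-up example, not a Micron roadmap figure.

# Rough arithmetic for the two scaling levers Micron describes: more
# memory layers scale capacity linearly, while shrinking the lithographic
# pitch scales it with the square of the shrink. Starting point is the
# announced 128-Gb, two-layer die; the 0.8x shrink is illustrative.

BASE_CAPACITY_GB = 128   # gigabits per die
BASE_LAYERS = 2

def projected_capacity_gb(layers: int, pitch_shrink: float = 1.0) -> float:
    """Capacity if the layer count changes and/or the pitch shrinks by
    `pitch_shrink` in each lateral dimension (0.8 = 20% tighter pitch)."""
    return BASE_CAPACITY_GB * (layers / BASE_LAYERS) / (pitch_shrink ** 2)

print(projected_capacity_gb(layers=4))                    # 256 Gb: double the layers
print(projected_capacity_gb(layers=2, pitch_shrink=0.8))  # 200 Gb: pitch scaling alone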

DeBoer noted that “significant system-level enablement is required to exploit the full value of 3D XPoint memory, and this ongoing effort will take time to fully mature.”

eMRAM Race Begins Among Major Foundries

Magnetic RAM technology has been under serious development for three decades, overcoming significant hurdles along the way with breakthroughs in MgO-based tunnel junction materials and device architecture. Everspin Technologies has been shipping discrete MRAM devices for nearly a decade, and the three major foundries are readying embedded MRAM for SoCs, automotive ICs, and other products. The initial target is to replace embedded NOR-type flash, largely because NOR devices require large charge pumps for programming and add multiple mask layers to the process.

GlobalFoundries, which manufactures the Everspin discrete devices, has qualified eMRAM for its 22nm FD-SOI process, called 22FDX. TSMC also has eMRAM plans.

At the Symposium on VLSI Technology, Samsung Foundry (Giheung, Korea) senior manager Yong Kyu Lee described an embedded STT-MRAM in a 28-nm FDSOI logic process, aimed at high-speed industrial MCU and IoT applications.

Interestingly, Lee said the FD-SOI technology “has superior RF performance, low power, and better analog characteristics than 28-nm bulk and 14-nm FinFET CMOS.” Lee indicated that the FD-SOI-based eMRAM would be production-ready later this year.

Samsung ported its STT perpendicular-MTJ (magnetic tunnel junction) eMRAM technology from its 28-nm bulk to its FD-SOI CMOS process. The company offers the eMRAM as a module, complementing an RF module. The “merged embedded STT MRAM and RF-CMOS process is compatible with the existing logic process, enabling reuse of IP,” he said.

Looking forward to the day when MRAM could complement or replace SRAM, Lee said “even though we have not included data in this paper, our MTJ shows a potential for storage working memory due to high endurance (>1E10) and fast writing (<30ns).”

Beyond Embedded to Last Level Cache

As foundries and their customers gain confidence in eMRAM’s retention, power consumption, and reliability, it will begin to replace NOR flash at the 40-nm, 28-nm, and smaller nodes. However, further engineering improvements are needed before it can tackle SRAM replacement.

SRAM scaling is proving increasingly difficult, both in terms of the minimum voltages required and the size of the six-transistor bit cells. MRAM researchers are in hot pursuit of the ability to replace some of the SRAM on processors with Last Level Cache (LLC) versions of magnetic memory. These LLC MRAMs would be fabricated at the 7-nm, 5-nm, and more advanced nodes.

Mahendra Pakala, senior director of memory and materials at the Applied Materials Advanced Product Technology Development group, said for eMRAM the main challenges now are achieving high yields with less shorting between the magnetic tunnel junctions (MTJs). “The big foundries have been working through those problems, and embedded MRAM is getting closer to reality, ramping up sometime this year,” he said.

For LLC applications, STT-MRAM has approached SRAM and DRAM performance levels for small sample sizes. At the VLSI symposium, researchers from Applied Materials, Qualcomm, Samsung, and TDK-Headway all presented work on SRAM cache-type MRAM devices with high performance, tight pitches, and relatively low write currents.

Applied’s VLSI symposium presentation was by Lin Xue, who said the LLC-type MRAM performance is largely controlled by the quality of the PVD-deposited layers in the MTJ, while yields are governed by the ability to etch the MTJ pillars efficiently. Etching is extremely challenging at the tight pitches required for SRAM replacement, since the pillars must be etched without redepositing material on the sidewalls.

Caption: Lin Xue, et al, Applied Materials presentation at 2018 Symposium on VLSI Technology

Deposition is also difficult. The MTJ structures contain multiple stacks of cobalt and platinum, and the thickness of the multilayers must be reduced to meet the 7nm node requirements.  Any roughness in the interfaces creates secondary effects which reduce perpendicular magnetic anisotropy (PMA). “The performance is coming from the interface, essentially. If you don’t make the interface sharp, you don’t end up with the expected improvement in PMA,” Pakala said.

Applied has optimized a PVD process for deposition of the 15-plus layers of many different materials required for the magnetic tunnel junctions. Pakala said the PVD technology can sputter more than 10 different materials. The Endura-based system uses a multi-cathode approach, enabling each chamber to have up to five targets. With a system of seven chambers, companies can deposit the required variety of materials and, if desired, increase throughput by doubling up on the targets.
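As a quick sanity check on that configuration (assuming every slot is usable, which the article does not state), seven chambers with up to five targets each provide comfortably more target positions than the 10-plus materials required, as the arithmetic below shows.

# Rough slot count for the multi-cathode PVD configuration described:
# seven chambers with up to five targets each. Whether every slot is
# usable in practice is an assumption; this only shows the arithmetic.

CHAMBERS = 7
TARGETS_PER_CHAMBER = 5
MATERIALS_NEEDED = 10          # "more than 10 different materials"

total_slots = CHAMBERS * TARGETS_PER_CHAMBER   # 35 target positions
spare_slots = total_slots - MATERIALS_NEEDED   # 25 positions left over
print(total_slots, spare_slots)

# The spare positions can hold duplicate targets of the most-used
# materials, which is how throughput can be raised by "doubling up."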

The system would include a metrology capability, and because the materials are easily oxidized, the entire system operates at vacuum levels beyond the normal 10^-8 Torr level. For MRAM deposition, operating at 10^-9 or even 10^-10 Torr may be required.

“When we start talking about the 7 and 5 nanometer nodes for SRAM cache replacement, the cell size and distances between the bits becomes very small, less than 100 nm from one MTJ to another. When we get to such small distances, there are etching issues, mainly redepositing on the sidewalls. The challenge is: How do we etch at reduced pitch without shorting?” Pakala said.

“Integrated thermal treatment and metrology to measure the thicknesses, all of which has to be done at extremely low vacuum, are major requirements,” he said.

“At this point it is not a question of the basic physics. For MRAM, it is, as they say, ‘just engineering’ from here on out,” he said.

 
