
How the New Data Economy is Driving Memory to 3D NAND

Monday, December 3rd, 2018

In the past, growth has primarily been driven by processors with architectures focused on computation. The next generation of computing, however, is proving to be more memory-centric, with a lake of data at the center of processing power.

Semiconductor markets are seeing the emergence of memory-centric computing. The current trends of the data economy and Artificial Intelligence (AI) appear to be driving growth in memory. Backing this up, SEMI (semi.org), a global industry association serving the manufacturing supply chain for the electronics industry, has increased its worldwide semiconductor revenue forecast for the latter half of 2018 to 15 percent higher than 2017 revenues, up from the original forecast of 7.5 percent. To date, the growth engine for chip revenue has been processors with computation-focused architecture, as in desktops and supercomputers. Such a processor sits physically at the center of the die and communicates with the northbridge, which provides the interfaces to DRAM and PCIe, while the southbridge connects to the hard drive or a Solid-State Drive (SSD). However, the next generation of computing is proving to be more memory-centric, with a lake of data at the center of processing power.

New Data Economy Brings Changes
Cloud services like Amazon Web Service (AWS) access a centralized “data lake” that is easily 100 or 200 TB of DRAM. According to AWS, a data lake “allows you to store all your structured and unstructured data at any scale. You can store your data as-is, without having first to structure the data.”[i] The surrounding processors access the data lake for different purposes. GPUs might be tuned for machine learning, FPGAs might be accelerating other processes, and other processors might just be driving data traffic. However, the relatively new data economy and cloud-related elements such as the data lake architecture are one of the factors that have been propelling the growth in memory sales over the last two or three years. DRAM prices increased dramatically due to shortages in 2018, although DRAMExchange indicates that memory prices may drop by as much as 15% – 25% next year as a reaction to the oversupply.[ii]

Figure 1: Market CAGRs of Major Product Categories (2018 – 2023F) IC Insights projects Memory to have the highest Compound Annual Growth Rate (CAGR) of all ICs. (©IC Insights)

Mike Howard, VP of DRAM and Memory Research, and Walt Coon, VP of NAND and Memory Research, both at Yole Développement, maintain that “In the last two years, the DRAM and NAND memory business hit record-high revenues. The industry announced an impressive 32% CAGR between 2016 and 2018, with revenue growing from US$ 77 billion to an estimated US$ 177 billion.”[iii] Graphics cards and servers drove the recent high demand for DRAM. The latest innovation in memory is 3D NAND, in which memory cells are stacked in multiple layers. 3D NAND memory is enabling a new storage solution, driven primarily by all the data that we are creating, saving, and analyzing. Until recently, storage has been a choice between hard drives and flash memory (SSDs): flash memory is fast but more expensive, whereas the hard drive is slower but less expensive. 3D NAND technology is closing the cost gap rapidly.

Two years ago, the industry started with eight layers and then moved quickly to 32 layers; thus, for the same area of silicon, output quadrupled. Today, the industry mainstream is 64 layers for 3D NAND, and work has begun on 96-layer 3D NAND. Therefore, in the same area of silicon where fabs used to make one transistor, they can now build 96 stacked transistors.
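To first order, stacking layers multiplies bit output for the same die area. The sketch below reproduces the scaling described above; it deliberately ignores yield, string stacking, and cell-geometry effects.

```python
# First-order 3D NAND density scaling: for the same silicon area,
# bit output grows roughly linearly with the number of stacked layers.
# (Ignores yield, string stacking, and cell-geometry effects.)
def density_multiplier(layers: int, baseline_layers: int = 8) -> float:
    """Bit output relative to an 8-layer baseline, same die area."""
    return layers / baseline_layers

print(density_multiplier(32))   # 4.0 -> the move from 8 to 32 layers quadrupled output
print(density_multiplier(96))   # 12.0 relative to the 8-layer starting point
```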

The cost of flash memory is falling rapidly toward that of hard disk drives, while its performance continues to improve. The replacement of hard drives and the demand for more data storage are the present drivers of 3D NAND growth. Altogether, logic is progressing, memory has entered center stage, and storage is transitioning from mechanical to solid-state media. 3D NAND growth is lowering the cost and improving the performance of flash memory.

Challenges still exist for memory, however. As Yole’s Coon and Howard state, “From a technological perspective, it continues to get more and more difficult to grow bit output on the wafer, which is a key for driving down cost per bit for both DRAM and NAND. The former is constrained by lithography shrinks, while the latter is constrained by limits on 3D stacking and wafer throughput losses as wafer processing time has increased significantly due to the transition from planar (2D) to 3D NAND.”[iii]

One area that most ignore is legacy nodes, meaning anything above 20nm. Legacy-node chips are growing in importance because two of the fastest growing sectors for the semiconductor industry are automotive and industrial. These sectors do need some CPUs for processing, but they need many more inexpensive sensors, microcontrollers, and power management devices. Five years ago, German automakers put maybe 300 integrated circuits in each automobile. Today, a high-end German car can have as many as 8,000. Since the majority are sensors and similar devices, the automotive sector is driving the legacy (or mainstream) nodes.

Cisco’s Visual Networking Index: Forecast and Trends, 2017–2022 predicts almost a 3x global increase in IP traffic in just five years. Of that traffic, 46% will be mobile traffic.[iv] Flash memory has enabled traffic growth and is replacing inferior storage media.
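As a sanity check on that forecast, the implied compound annual growth rate of a near-3x increase over five years can be computed directly:

```python
# Implied compound annual growth rate (CAGR) for an N-fold increase over Y years.
def cagr(multiple: float, years: int) -> float:
    """Annual growth rate implied by reaching `multiple` times the start in `years`."""
    return multiple ** (1 / years) - 1

print(round(cagr(3, 5) * 100, 1))  # a 3x rise in 5 years implies ~24.6% per year
```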

One example of the growth in data created, used, and stored by society can be seen in the progression of available storage in mobile phones. Consumers used to buy the iPhone with 8 GB of NAND memory. Then 16 GB became the premium iPhone configuration, whereas today most buyers choose iPhones with 128 to 256 GB of NAND memory. The Samsung Galaxy Note 9 supports up to one terabyte of storage. Data creation is the fundamental driver for many of the present industry trends.

With so much data, we need to analyze it and discard the garbage, because raw data in such volume is of little use on its own. The need for data analysis leads to the need for artificial intelligence and data science. All of the above interact with each other and create a very healthy industry.

Mike Howard

Mike Howard is a member of the memory team at Yole Développement (Yole) as VP of DRAM and Memory Research. Howard’s mission at Yole is to deliver a comprehensive understanding of the entire memory and semiconductor landscape (with special emphasis on DRAM) via market updates, Market Monitors, and Pricing Monitors. He is also deeply involved in the business development of all memory activities. Howard has a deep understanding of the DRAM and memory markets with a valuable combination of industry and market research experience. For the decade prior to joining Yole, Howard was the Senior Director of DRAM and Memory Research at IHS. Before IHS, Howard worked at Micron Technology, where he had roles in corporate development, marketing, and engineering.
Howard earned a Master of Business Administration at Ohio State University (United States), a Bachelor of Science in Chemical Engineering, and a Bachelor of Arts in Finance at the University of Washington.

Walt Coon

Walt Coon joined Yole Développement’s memory team as VP of NAND and Memory Research, part of the Semiconductor & Software division. Walt is leading the day-to-day production of market updates, Market Monitors, and Pricing Monitors, with a focus on the NAND market and the semiconductor industry. In addition, he is deeply involved in the business development of these activities. Coon has significant experience within the memory and semiconductor industry. He spent 16 years at Micron Technology, managing the team responsible for competitor benchmarking and for industry supply, demand, and cost modeling. His team also supported both corporate strategy and Mergers & Acquisitions analysis. Previously, he spent time in Information Systems, developing engineering applications to support memory process and yield enhancement.

Coon earned a Master of Business Administration from Boise State University (Idaho, United States) and a Bachelor of Science in Computer Science from the University of Utah (United States).


Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.

[i] https://aws.amazon.com/big-data/datalakes-and-analytics/what-is-a-data-lake/

[ii] https://www.dramexchange.com/WeeklyResearch/Post/2/5145.html

[iii] https://www.i-micronews.com/memory/12671-memory-business-what-s-next-interview-by-yole-developpement.html

[iv] https://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white-paper-c11-741490.html#_Toc529314186

Write Endurance to Write Home About

Wednesday, August 29th, 2018

Radiation tolerance, power efficiency, and fast write performance also characterize F-RAM non-volatile storage technology.

Ferroelectric Random Access Memory (F-RAM) is a non-volatile storage technology that offers low power, fast write performance, and greater write endurance than EEPROM or flash technologies. For example, the write endurance of F-RAM from Cypress Semiconductor is 10^14 (100 trillion) write cycles. Presuming the device takes 4 ms to rewrite every cell, it would take a minimum of 126 centuries (more than 12,000 years) of continuous rewriting for a failure to occur. EEPROM and NOR flash, by contrast, have write endurance of just 10^6 (1 million) write cycles. Additionally, F-RAM data retention is very robust, supporting a minimum of 10 years, and more than 121 years of data retention at +85 °C, depending on the individual product.
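The endurance figure above can be sanity-checked with simple arithmetic; the 10^14-cycle endurance and the 4 ms rewrite interval are the values given in the text:

```python
# Back-of-the-envelope F-RAM wear-out estimate using the figures from the text.
WRITE_ENDURANCE_CYCLES = 10**14   # rated F-RAM write endurance per cell
REWRITE_INTERVAL_S = 4e-3         # one full rewrite of a cell every 4 ms
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_wear_out(cycles: float, interval_s: float) -> float:
    """Years of continuous rewriting before the endurance limit is reached."""
    return cycles * interval_s / SECONDS_PER_YEAR

print(round(years_to_wear_out(WRITE_ENDURANCE_CYCLES, REWRITE_INTERVAL_S)))  # ~12,675 years
```

The same arithmetic applied to a 10^6-cycle EEPROM gives a wear-out time of roughly an hour under identical continuous rewriting, which is the contrast the article is drawing.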

Figure 1: Technologic Systems is now offering single board computers with an added Ferroelectric Random Access Memory (F-RAM) from Cypress Semiconductor.

The high-speed nature of the device combined with its non-volatility and data retention makes this memory device useful in many applications. The F-RAM used in Technologic Systems’ products is an AT25-compatible SPI device. The TS-7553-V2 board support package implements the F-RAM as an extra EEPROM-like memory and presents the whole device as a flat file.

Use Cases for F-RAM
Attached as a simple memory device, F-RAM can be used to store any data an application may need to retain in either traditional RAM or non-volatile memory. Having a secondary memory device can be useful for applications such as per-unit configuration/calibration data, non-secure storage of unique serial numbers or IDs, and boot flags. Applications that can take advantage of the high write endurance of F-RAM memory include temporary storage that needs to remain non-volatile, counters that require frequent writes, or data logging applications that need to store local data logs that will be read or transmitted later.
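As a sketch of the counter use case, the code below keeps a 32-bit boot counter at a fixed offset in the F-RAM's flat-file presentation. The device path and offset are hypothetical values chosen for illustration, not actual TS-7553-V2 BSP details:

```python
import struct

# Sketch: persistent boot counter in an F-RAM exposed as a flat file.
# FRAM_PATH and COUNTER_OFFSET are illustrative assumptions only.
FRAM_PATH = "/dev/fram"    # hypothetical flat-file device node
COUNTER_OFFSET = 0x10      # arbitrary location chosen for a 4-byte counter

def bump_boot_counter(path: str = FRAM_PATH, offset: int = COUNTER_OFFSET) -> int:
    """Read a 32-bit little-endian counter, increment it, and write it back."""
    with open(path, "r+b") as f:
        f.seek(offset)
        (count,) = struct.unpack("<I", f.read(4))
        count = (count + 1) & 0xFFFFFFFF   # wrap at 32 bits
        f.seek(offset)
        f.write(struct.pack("<I", count))
    return count
```

Because F-RAM writes complete at bus speed with no erase cycle, such a counter can be updated on every boot, or far more often, without endurance concerns.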

F-RAM has also proven to be highly tolerant to radiation effects that may be experienced in airborne and space applications. Typical memories retain state using various techniques that are susceptible to corruption from alpha particles, cosmic rays, heavy ions, and gamma- and x-rays. For instance, Dynamic Random Access Memory (DRAM) stores charge in a capacitor. Static Random Access Memory (SRAM) sets a latch. And EEPROM typically stores charge in an insulated floating gate. Each of these can experience a bit-flip, or soft error, as a result of a radiation event. Because the F-RAM cell stores the state as a lead zirconate titanate (PZT) polarization, an alpha particle or heavy ion hit is very unlikely to cause the polarization to change a given cell’s state, resulting in a robust tolerance to these events.

As the number of high altitude and low earth orbit projects continues to grow, especially for small-sat/cube-sat style projects, single board computers will rely on serial I/O boot code and data logging memories. F-RAM is the robust choice for these harsh environments.

Construction and Operation
The ferroelectric in F-RAM refers to a type of crystal material that is made of tiny electric dipoles where the positive and negative charges have a slight separation. This electric polarization is a natural spontaneous state of particular types of crystals and can be controlled via the application of an external electric field. The ferroelectric property is a phenomenon observed in a class of materials such as the PZT used in Cypress’ F-RAM (Figure 2).

Figure 2: Ferroelectric PZT Crystal

Applying an electric field across the crystal causes the polarization to align in the direction of the field; conversely, when the electric field is reversed, the polarization flips to the opposite state. The polarization remains in its last state until another electric field is applied, allowing reliable, non-volatile memory to be constructed from these crystals.

F-RAM Operation
F-RAM is constructed of ferroelectric crystals used as a capacitor dielectric in a structure similar to DRAM. Where flash or EEPROM technologies rely on a charge being trapped on a floating gate, DRAM captures its charge in a capacitor. The capacitor in the F-RAM cell uses a ferroelectric material, typically lead zirconate titanate, as the dielectric. Because the state is held as a polarization of the dielectric rather than as a stored charge, the cell maintains its current state without the constant refresh required by DRAM, and the state persists even without power.

The ferroelectric capacitor symbol (Figure 3) indicates that the capacitance is variable rather than that of a traditional linear capacitor. If a ferroelectric capacitor is not switched when an electric field is applied (no change in polarization state), it behaves like a linear capacitor. If it is switched, an additional charge is induced, so the effective capacitance increases. The ferroelectric capacitor is combined with an access transistor, a bit line, and a plate line to form the memory cell shown in Figure 3.

Figure 3: F-RAM Memory Cell


The operation of an F-RAM cell is also similar to DRAM. Writing is accomplished by applying a field across the ferroelectric layer in the capacitor by charging the plates on either side of it. The direction of the charge orients the dipoles in the ferroelectric layer one way or the other. This is how logical “0” and “1” are represented.

A read operation is destructive in both DRAM and F-RAM, meaning that after a read, a rewrite must occur for the data to remain in the cell. In a DRAM read, the cell is drained into a sense amplifier, which determines whether or not the cell contained a charge.

F-RAM is read by setting the cell to a known logical state, for example a “0.”  If the cell is already in this state, then the output from the cell shows no change, and it is known that the cell was in a “0” state. However, if the cell is in the opposite state, the output from the cell is a brief current pulse as electrons are pushed out during the ferroelectric polarity switch. The internal controller will then reset the cell to the correct value to be stored back, and the read data is transferred out of the IC.
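The read-and-restore sequence just described can be sketched as a toy model. This is purely illustrative; real sensing happens in the analog charge domain inside the device:

```python
# Toy model of the destructive F-RAM read-and-restore sequence described above.
# Purely illustrative: real sensing happens in the analog charge domain.
class FramCell:
    def __init__(self, state: int = 0):
        self.state = state         # polarization state: 0 or 1
        self.restore_count = 0     # rewrites performed after destructive reads

    def _force(self, value: int) -> bool:
        """Drive the cell to `value`; return True if the polarization flipped
        (a flip appears externally as a brief current pulse)."""
        flipped = self.state != value
        self.state = value
        return flipped

    def read(self) -> int:
        pulse = self._force(0)     # set the cell to a known "0"
        value = 1 if pulse else 0  # a current pulse means it held a "1"
        if pulse:                  # the read destroyed a "1": restore it
            self._force(1)
            self.restore_count += 1
        return value

cell = FramCell(1)
print(cell.read(), cell.state)  # 1 1: the value is read and the state restored
```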

F-RAM Benefits Over Other Non-Volatile Memory
Traditional writable non-volatile memories that use floating gate technology, such as EEPROM or flash, use charge pumps to develop high voltages on-chip (10 V or more) to force carriers through the gate oxide. Charge pumps bring long write delays and high power consumption for write and erase operations. Additionally, each write operation stresses the gate oxide of the memory cell, which limits the operating life. F-RAM’s crystalline structure does not have this wear-out mechanism of forcing carriers through oxides, ensuring a longer operating life. Also, the switching speed of the F-RAM’s crystalline structure supports an “instant write,” guaranteeing that when data is presented to the device it is stored without any internal chip delay. This eliminates “data at risk” in case of a sudden power loss. The mechanism used to apply an electric field to the F-RAM structure also requires significantly less power than the charge pumps in floating gate technologies.


Kris Bahnsen is a Software Engineer at Technologic Systems. Eliza Schaub is a Hardware Design Engineer at Technologic Systems.

 

Memory Class Storage for Embedded Applications

Tuesday, July 10th, 2018

Why a DRAM replacement with non-volatility bodes well for the memory roadmap’s future

Introduction
All indications point to a dramatic shift in the availability of new memory architectures, making the next couple of years perhaps the most dramatic in many decades. Articles seem to pop up daily on the emerging memory options, from PCM to ReRAM to MRAM to 3DXpoint. While each of these has its unique characteristics, they have one thing in common that distinguishes them from DRAM: non-volatility. The industry has introduced the term “Storage Class Memory,” or SCM, to encapsulate these various technologies; however, lumping all emerging technologies into a single category does the industry a bit of a disservice. None of these SCMs provides a feature set even close to 100% compatibility with a standard DRAM interface, and in many ways they are more competition for flash than for DRAM. There is a need for an additional standard phrase for DRAM replacement technologies that offer 100% compatibility with DRAM while still providing persistent memory, which we call “Memory Class Storage.”

Figure 1: Memory Hierarchy

The Deterministic DRAM Interface
The DRAM interface is quite demanding. Its characteristics include a reasonably high frequency, currently around 1600 MHz for a 3200 Mbps data rate, coupled with a short access time of roughly 15 ns and an overall cycle time around 45 ns. While there is some flexibility in these numbers, the range of acceptable solutions tends to be pretty tight, within a few nanoseconds of these values. Because the interface was designed around the electrical characteristics of the DRAM’s embedded transistor-plus-capacitor memory cell, the key factor is the deterministic requirement that reads and writes deliver good data within the access time and complete background bookkeeping within the cycle time.
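For a feel of how tight these timings are, the nanosecond figures above can be converted into whole command-clock cycles at the 1600 MHz clock mentioned in the text (the 15 ns and 45 ns values are the article's approximations):

```python
# Convert DRAM timing parameters (ns) into whole command-clock cycles.
# Integer math (ns x MHz, ceiling-divided by 1000) avoids float rounding surprises.
CLOCK_MHZ = 1600  # 1600 MHz command clock -> 3200 Mbps data rate

def ns_to_clocks(t_ns: int, f_mhz: int = CLOCK_MHZ) -> int:
    """Round a nanosecond timing parameter up to whole clock cycles."""
    return -(-t_ns * f_mhz // 1000)

print(ns_to_clocks(15))  # 24 clocks for the ~15 ns access time
print(ns_to_clocks(45))  # 72 clocks for the ~45 ns cycle time
```

A replacement technology must hit these cycle counts on every access, with no exceptions, to qualify as deterministic in the sense used here.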

This determinism is the primary difference between a Storage Class Memory (SCM) and Memory Class Storage (MCS). In order for a device to directly replace a DRAM, it needs to meet all the deterministic requirements of a DRAM. While the universe of Storage Class Memories offers a variety of features, none has all the required features of DRAM. FeRAM comes close with its high write endurance, but it suffers from access time limitations, limited scalability, and very high cost. ReRAM comes close in terms of cost, but its slow access time and especially its limited write endurance prevent deterministic operation. MRAM shows promise in terms of access time, but MRAM will have challenges meeting the power envelope of a DRAM and, of course, in converging on DRAM-like pricing.

Enabling Non-Deterministic Memories
There are some efforts under way to change the protocol to allow non-deterministic memories to share the DRAM bus. 3DXpoint requires some proprietary hooks to allow it to coexist with DRAM, and these tricks offset some of the performance penalties associated with long write times and low write endurance, which require the 3DXpoint to go offline periodically for maintenance such as wear leveling. The NVDIMM-P protocol is another effort to allow non-determinism on the DRAM channel, however NVDIMM-P requires an expensive, complex centralized controller to filter all data traffic and break it into read and write credit packets that may be serviced in or out of order.

Figure 2: Non-deterministic Protocol

The main takeaway from this line of analysis is that none of these technologies is a DRAM replacement. The tuned performance of the native DRAM interface is fundamentally faster than the non-deterministic alternatives. All SCMs require these compromises in order to populate the DRAM channel, but the compromises also affect system architecture, and especially software, in some very unpleasant ways.

The big problem with incorporating the SCM protocols into a system designed for DRAM is that the performance of the subsystem becomes very asymmetric. For all its other problems, DRAM at least has a very predictable nature, but all that goes away with SCM protocols. Individual processor threads get stalled whenever an SCM goes into housekeeping mode, and overall system performance is reduced. Software must be rewritten to comprehend such asymmetry, and few companies are willing to change billions of lines of tested code to tune for an asymmetric memory subsystem.

While some systems will take advantage of direct access modes such as DAX, a majority of systems mount SCM not as main memory but as disk storage. Since legacy software is already prepared for the potentially long delays of going through the operating system’s disk drivers, an access to a non-deterministic SCM is still much faster than a disk drive, and these systems see some performance enhancement.

Memory Class Storage
Introducing Memory Class Storage changes the math again. Simply put, MCS is a DRAM replacement with non-volatility. To be considered MCS, such a technology must meet all DRAM timings in a fully deterministic way. MCS must have essentially unlimited write endurance to meet this requirement so that it never goes offline for maintenance.

Nantero NRAM™ is the world’s only Memory Class Storage device. NRAM meets all DDR4 timings and provides the required write endurance for truly deterministic performance. The power profile for NRAM is lower than that of DRAM, and it uses the same supply voltages for a true drop-in replacement memory.

The system may simply execute all functions with NRAM as though DRAM were installed. This allows for extremely simple integration into systems through true plug and play. However, it also leaves significant headroom on the table. Since NRAM is inherently persistent, the refresh operations required by a DRAM are no longer necessary. This alone can improve memory performance by 15 percent even at the same clock frequency. Similarly, NRAM has an inherently non-destructive read, so the DRAM precharge operation is not required, making additional command slots available to the controller for other uses.

MCS is More Than Just Another Memory
Memory Class Storage enables a whole new category of computing as well. Memory modules with MCS can directly replace NVDIMM-N memories, eliminating the need for bulky, expensive, and unreliable battery or supercapacitor backup hardware, and nullifying the time-consuming backup and restore procedures from power failure. With an MCS memory module, power can be removed at any time without affecting the validity of the memory content. When power is restored, system operation can continue from where it left off.

Figure 3: NRAM Memory Module Based on Industry Standard DIMM

From the system software perspective, MCS as main memory relieves all concerns about performance asymmetry. Data persistence is inherent in all operations, thereby eliminating the need to funnel performance-critical operations through the disk drive interface. Direct memory access becomes the normal mode of operation, increasing performance by hundreds to thousands of times.

MCS in the DDR5 Age
Looking ahead to DDR5, there is a dark cloud on the horizon for DRAM in that it is planned to max out at 32 Gb per device just about half way into the DDR5 life cycle. At the same time, advances in embedded systems for artificial intelligence, deep learning, and similar applications are demanding more memory. NRAM can use a crosspoint architecture or a 1T-1R structure, both of which are far more space efficient than the transistor and capacitor combination of a DRAM cell. As a result, DDR5 NRAM will deliver at least 8 to 16 times the per-device capacity of DDR5 DRAM, or 256 Gb to 512 Gb, and at higher performance than an SDRAM.

Memory Class Storage may not displace Storage Class Memory, but instead may coexist. The non-deterministic protocols of 3DXpoint and NVDIMM-P will allow slower media to offer very high per-module storage capacity, similar to an SSD. Memory Class Storage, however, will offer the highest performance for main memory along with non-volatility that eliminates the need to back up main memory in case of power failure.

DRAM fades from memory (if you’ll forgive the pun) after 32 Gb. Fortunately, Memory Class Storage is coming to extend the main memory roadmap into the future.


Mr. Gervasi is Principal Systems Architect at Nantero, Inc. He has been working with memory devices and subsystems since 1Kb DRAM and EPROM were the leading edge of technology. He has been a JEDEC chairman since 1996, responsible for key introductions including DDR SDRAM, the integrated Registering Clock Driver and RDIMM architecture, and the formation of the JEDEC committee on SSDs, and he has been actively involved in the definition of NVDIMM protocols.

The 5S’s of Secure Storage for Military Embedded Computing Systems

Monday, June 4th, 2018

As sensors and processing continue to do more in shrinking spaces, system architects are looking beyond just file and endpoint encryption.

How does a system architect design secure data storage into a military embedded system? Software encryption comes to mind immediately for most people. Many robust file and endpoint encryption packages are easily and quickly upgradable and deployable across a large network of devices. However, software running on the operating system can be altered, hacked, and possibly even removed, leaving data vulnerable to adversarial capture. When mission-critical, classified, and top secret data needs protection, there are other, far more secure methods to consider. Self-encrypting hardware provides the ultimate data security. In this article, we’ll explore the “five S’s” of secure data storage for military embedded systems: Size, weight, and power (SWaP), Speed, Security, Sanitize, and Self-Destruct.

Data Protection Where SWaP is at a Premium
With the rise of unmanned and mobile communication systems in modern warfare, board and system architects are challenged to pack more sensor and processing capabilities into these SWaP-constrained environments. There is less room for conventional storage like hard disk drives with rotating magnetic media. Solid-state drives (SSD) using non-volatile NAND flash memory offer substantially higher sustained read and write speeds. The SSDs keep power consumption low, with the ability to customize consumption for each unique application. By utilizing advanced miniaturization and three-dimensional stacking technologies, high-speed, high-capacity, and low-power data storage can be realized in a variety of form factors, including ultra-compact ball grid array (BGA) packages. Military-grade BGA SSD devices incorporate both security and ruggedization, thereby providing assured reliability in harsh military environments. These secure SSD devices are the ideal solution for data protection in SWaP-constrained embedded systems such as avionics, unmanned vehicles, mobile communication systems, wearable man-packs, laptops, and tablets.

Figure 1: Mercury Systems’ TRRUST-Stor BGA is an example of successfully implementing the five S’s of secure data storage.

Avoiding Speed Degradation
Data speed is vital to most military embedded computing applications. However, using a virtual private network (VPN) or other encryption applications can slow computing functions, as these applications use the host CPU to encrypt every data packet. This approach consumes bandwidth and slows normal computer functions, including data acquisition. The same speed degradation occurs when file encryption or endpoint encryption software is employed. Moving this functionality from the software level to the hardware level, however, leaves laptop or workstation performance unaffected by the encryption process. A secure SSD has dedicated hardware to manage the encryption and decryption processes, leaving the performance of the host system uncompromised.

Military systems require reliable high-speed data transfer rates to capture, process, and disseminate sensor data in both benign and harsh environments. The read and write speeds of a military-grade SSD parallel those of non-encrypting commercial drives. When used in forward-deployed defense systems, military-grade SSD devices surpass their commercial counterparts with the ability to maintain sustained read/write rates during (1) extreme temperature exposure, (2) thermal shock conditions, (3) mechanical shock conditions, (4) high vibration conditions, or any combination of the above. Military-grade SSD devices are engineered with rugged enclosures, military-grade components, and NAND flash from trusted sources.

Two Independent Encryption Layers for Security
All secure SSD devices use cryptographic algorithms built into the controller to encrypt every bit of data stored. Most self-encrypting drives are designed with Advanced Encryption Standard (AES) 256-bit in XTS block cipher mode to protect data. With a high entropy key value, AES 256-bit XTS encryption is virtually impossible to break, even by the fastest supercomputers today.

Programs securing highly sensitive or classified data require assurance that the cryptographic algorithms have been correctly implemented. This assurance process is conducted through validation and certification at organizations such as the National Institute of Standards and Technology (NIST) and the National Information Assurance Partnership (NIAP). These organizations oversee the Federal Information Processing Standards (FIPS) that certify the proper implementation of encryption algorithms, key management, and authentication algorithms, and the Common Criteria certification of encryption protection profiles. Hardware full disk encryption components obtaining these certifications can be eligible for the National Security Agency’s (NSA) Commercial Solutions for Classified (CSfC) program for the protection of classified, secret, and top secret data at rest.

The CSfC program provides solution-level specifications called Capability Packages (CP) to deliver data security solutions using a two-layer approach. In the Data at Rest (DAR) CP, data protection is accomplished by integrating an inner and outer layer of hardware and software encryption. The SSD device is the inner layer, while a file encryption or software full disk encryption solution is the outer layer. Two independent encryption layers eliminate the likelihood that a single vulnerability can be exploited in both security layers. Classified, secret, and top secret data can be safely stored if all of the CSfC program requirements are successfully validated per the CP criteria defined by the NSA, including using only hardware and software approved by the NSA that appears on the NSA’s CSfC component list.

Other security aspects for sensitive military applications should be considered. As a hypothetical scenario, consider a commercial SSD built with a controller designed and manufactured outside of the United States. This SSD is then integrated into the flight system of a military UAV. After integration into the platform, all quality checks have passed. The UAV’s flight system is operational. At a later time, this UAV is executing a mission where a terrorist training facility must be surveyed. As the drive’s total power-on time changes from 0200 to 0201 hours, a backdoor installed into the SSD’s controller is triggered. The flight system immediately shuts down. The mission is aborted, and the UAV is brought down in unfriendly territory. Sourcing an SSD with a NAND controller designed and manufactured in a domestic, trusted environment mitigates the risk of backdoors and unauthorized data access.

Figure 2: Mercury Systems’ ASURRE-Stor is the only Full Disk Encryption hardware eligible for the NSA’s CSfC program.

Fast Erase and Sanitize
As discussed, there are a number of advanced methods employed to secure data and eliminate the possibility of unauthorized access. However, there are scenarios when data must be rapidly wiped from the drive upon demand.

The fast erase and sanitization protocols integrated into military-grade SSDs address this scenario. A fast erase clears a drive’s encryption key within a fraction of a second and all NAND flash within a couple of seconds. Sanitization erases all blocks of the drive and overwrites them with random data in a process that is repeated numerous times. This can take minutes to tens of minutes to complete, depending on the number of overwrite operations.
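The difference between the two operations can be sketched in Python, using a toy in-memory “drive” in place of real NAND hardware. The block size, pass count, and erase value below are illustrative assumptions for the sketch, not any vendor’s actual protocol:

```python
import secrets

BLOCK_SIZE = 4096  # bytes per simulated NAND block (illustrative)

def fast_erase(key_store):
    """Fast erase: destroy the encryption key, leaving only
    undecryptable ciphertext on the NAND."""
    for i in range(len(key_store)):
        key_store[i] = 0

def sanitize(blocks, passes=3):
    """Sanitize: erase every block, then overwrite it with random
    data, repeating the cycle the requested number of times."""
    for _ in range(passes):
        for block in blocks:
            for i in range(len(block)):
                block[i] = 0xFF  # NAND erase sets cells to all ones
            block[:] = secrets.token_bytes(len(block))  # random overwrite

# Toy "drive": one 256-bit key and four data blocks
key = bytearray(secrets.token_bytes(32))
drive = [bytearray(secrets.token_bytes(BLOCK_SIZE)) for _ in range(4)]
original = [bytes(b) for b in drive]

fast_erase(key)
sanitize(drive)

assert key == bytearray(32)                                 # key is gone
assert all(bytes(b) != o for b, o in zip(drive, original))  # data is gone
```

The asymmetry explains the timing difference quoted above: a fast erase touches one small key, while sanitization touches every block on every pass.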

This is best illustrated by considering another hypothetical scenario. An aircraft using a secure SSD is forced to land in known hostile territory. As the pilot is landing the aircraft, she sees enemy soldiers approaching. She presses the sanitize button, which initiates all sanitization protocols. Enemy soldiers detain her for questioning, while others search the aircraft for data storage devices. Once the SSD is found, it is transported to a high-tech analytical lab for data retrieval. The drive powers on, but no useful data is found.

Readily Implementing Self-Destruct
In some military scenarios, it may be desirable to render the drive completely nonfunctional. Heat and chemical reactions are known mechanisms to physically destroy the memory cells of an SSD device, but this destruction mechanism can cause collateral damage if, for example, a fire spreads beyond the SSD device.

High-power magnetic exposure can be used to render conventional rotating media nonfunctional. However, this practice is not applicable to NAND flash. Even in the case of conventional rotating drives, such an approach may not be practical for forward-deployed embedded systems.

Non-thermal self-destruct mechanisms are the only way to ensure the safe destruction of the device without risking innocent life or inflicting collateral damage. Sophisticated non-thermal self-destruct mechanisms can be readily implemented in state-of-the-art military-grade SSD devices. After a specified number of failed attempts to authenticate, the device can initiate the non-thermal self-destruction process. The device then has no strategic value to either friendly or adversarial forces.
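The failed-authentication trigger reduces to a simple counter. The Python model below is purely illustrative: the class name, attempt limit, and “destruction” behavior are invented for the sketch and are not taken from any real drive’s firmware:

```python
class SecureDriveModel:
    """Toy model of a failed-authentication self-destruct policy."""

    def __init__(self, password, max_attempts=5):
        self._password = password
        self._max_attempts = max_attempts
        self._attempts_left = max_attempts
        self._destroyed = False

    def authenticate(self, password):
        if self._destroyed:
            return False  # a destroyed drive answers nothing
        if password == self._password:
            self._attempts_left = self._max_attempts  # reset on success
            return True
        self._attempts_left -= 1
        if self._attempts_left <= 0:
            self._self_destruct()
        return False

    def _self_destruct(self):
        # In hardware this step is irreversible and non-thermal.
        self._destroyed = True
        self._password = None

drive = SecureDriveModel("correct horse", max_attempts=3)
for _ in range(3):
    drive.authenticate("wrong guess")

assert drive._destroyed
assert not drive.authenticate("correct horse")  # even the right password is too late
```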

Conclusion
The challenges of designing truly secure data storage into modern military embedded systems can be solved with both hardware and software. When considering the five S’s (SWaP, Speed, Security, Sanitize, and Self-Destruct), no solution provides greater flexibility, reliability, and protection than a military-grade secure SSD.

Resources

White Paper: Safeguarding Mission Critical Data with Secure Solid State Drives.[1]

[1] http://info.mrcy.com/1703WPMSS-SafeguardingMissionCriticalData.html?utm_source=Embedded_Article


Jennifer Keenan is the Senior Product Marketing Manager for the Microelectronics Secure Solutions group of Mercury Systems in Phoenix, Arizona. She received her Bachelor of Science degree in Marketing from Florida State University in Tallahassee, Florida.

“Zero Data at Risk”: Embedded Storage Q&A

Thursday, March 1st, 2018

Stored data enables the discovery of patterns and more to support key choices across consumer, industrial, and medical markets, which is why top-notch security remains paramount.

Editor’s Note: Hiep Pham is Virtium’s VP of R&D. C.S. Lin is a marketing executive with Winbond Electronics Corporation America. Both spoke with EECatalog recently, responding to questions on embedded storage issues.

EECatalog: If decision makers and influencers are becoming clearer on the IoT versus Industrial IoT differences, what are the next ideas they need to understand in order to make their IIoT competitive?

C.S. Lin, Winbond Electronics Corporation America

C.S. Lin, Winbond Electronics Corporation America: Despite their common foundation—that of efficient machine-to-machine interaction—IoT and IIoT present starkly contrasting security demands. IoT, due to its predominantly consumer-focused applications, requires a comparatively modest level of security, whereas IIoT simply can’t be competitive in today’s market without stringent security, since it collects and stores crucial data used to gain new insights and make critical decisions. Ideally, that security would be “baked in”—that is, architected directly into IIoT components, such as the flash memory that is indispensable for every system.

Hiep Pham, Virtium: IIoT designs need to be undertaken holistically. Rather than focus on facilitating simple commands between IIoT devices, the system-design process needs to focus on key values for IIoT endpoints that are often deployed in hard-to-reach and harsh environments. These values include extended endurance, security, connectivity, remote manageability and, most importantly, zero data at risk. Intelligent, secure storage with continuous, reliable data logging is an important element in maintaining zero data at risk.

EECatalog: What changes in the storage market over the past five years have you seen change customer behavior, and how have you adapted?

Hiep Pham, Virtium

Pham, Virtium: The storage market clearly has matured. Capacities keep experiencing huge leaps, while the number of consumer devices that use some form of flash storage seems limitless. But what’s truly astounding is how machine learning and artificial intelligence have impacted cloud storage and servers, as they enable IIoT systems to leverage not only a mature infrastructure of networked storage, but also data analytics at the network edge. This leads to improved system efficiency, performance, and the ability to monitor and manage critical functions at remote locations. This is the approach we’ve been taking in our StorFly-IoT network-connected platform.

Lin, Winbond: As evidenced by hundreds of Winbond customers’ products, code storage has taken on ever-increasing importance in system designs. This is because it moves essential code quickly between flash devices and other system elements, such as system memory. Of course, those customers are also putting a much higher importance on securing that code storage, so they’re relying more on security schemes such as our TrustME Secure Flash architecture. Integrated directly into select flash memories, the TrustME Secure Flash architecture enables trust by enhancing advanced Flash memory technology with smart card security techniques. It also adheres to the Arm Platform Secure Architecture, which clearly outlines a requirement for secure boot, root of trust, and secure storage.

EECatalog: How can storage solutions support such evolving technologies as human machine interactions and augmented reality to strengthen IIoT?

Pham, Virtium: The combination of a central cloud server with storage/processing and remote intelligent IIoT solid-state storage serves to reduce the human-machine interaction and shift toward machine-to-machine systems. So, in that sense, the way today’s storage solutions are evolving is by enabling humans to confidently rely on intelligent storage at the edge to help streamline machine-to-machine operations.

EECatalog: Why take the long view when it comes to ups and downs in the storage market?

Lin, Winbond: It’s impossible to not take the long view in flash memory, whether for consumer products, enterprise systems or industrial designs. To neglect looking forward risks product failure, if not that of the entire business. Winbond is constantly looking several steps ahead with its flash memories—beyond what our customers have in the works and toward where we see markets heading years down the road, such as Winbond’s 1.2V family of low-voltage serial flash products designed for long-view market demands. But again, a central requirement remains securing the data stored.

Pham, Virtium: The world is moving steadily toward machine learning and AI—so the reliability of data has never been more important. With all the ups and downs in the storage market, there’s one constant: the need for all data to be managed and stored securely. That’s been the case for years and we shouldn’t expect it to change any time soon. Virtium was built on the foundational belief that data needs to be both protected and accessible, regardless of the environment—the same view we’ve had since our founding more than 20 years ago.

EECatalog: What IIoT issues are not getting the attention they deserve, and how can that situation be remedied?

Pham, Virtium: Some IIoT system designers would do well to be reminded that one size does not fit all. This is particularly true with SSDs; the IIoT application determines form factors, interfaces, capacities, power requirements, and the ability to withstand shock, vibration, and extreme temperatures. Having a broad range of SSD options specifically tailored for industrial IoT, as well as very long product life cycles, is what we do—and that helps remedy these IIoT challenges.

Lin, Winbond: I don’t mean to beat this drum incessantly, but the need to design in adequate security simply can’t be overstated. More than ever, we live in a time in which data and code are currency—often worth more than the systems that store them. We at Winbond are working diligently to remedy this by partnering with our customers and organizations such as Arm to bring secure flash memory the seriousness it deserves.


3D NAND: Can it Bridge the Endurance Gap?

Wednesday, November 22nd, 2017

Keeping close watch on durability and data protection as 3D NAND enters the Industrial IoT and M2M markets.

There’s presently a buzz surrounding 3D NAND and its growing acceptance in the industrial-embedded systems serving the Industrial IoT (IIoT) and machine-to-machine (M2M) markets. That buzz stems from 3D NAND’s potential to dramatically boost capacities while reducing the space and power needed for solid-state storage. This expansion of NAND technology into the third dimension pushes flash capacities to levels that were impractical just a short time ago.

Figure 1: Flash capacities are on the rise.

An Entirely New Dimension
It’s no surprise, then, that 3D NAND was a prominent topic at the recent Flash Memory Summit. One flash maker there showcased a 96-layer chip—expected to ship in 2018—with a whopping 768 Gbits on a single chip. Another maker is expected to ship a 1-Tbit chip next year as well. Mainstream bit capacity is now three bits per cell, or Triple Level Cell (TLC), and on its way is Quad Level Cell (QLC) at four bits per cell. But the real story here is that the flash makers are adding an entirely new dimension, literally, to flash storage.

Market analysts such as Objective Analysis project significant growth in the 3D NAND space in the coming years:

“Objective Analysis expects for NAND flash gigabyte consumption to grow 45 percent annually over the long-term. This means that six times as many flash exabytes will ship in 2021 as did in 2016. Revenues won’t rise at that rate, though, since prices are ripe for a collapse, which we expect to occur once 3D NAND begins to be manufactured efficiently. This will be triggered by a breakthrough whose timing can’t be pinpointed. Our current outlook is for that breakthrough to occur in mid-2018.”

With these new 3D approaches to manufacturing, there is a tradeoff—and we should expect some sort of compromise between endurance and capacity. After all, packing three bits per cell into the chips and then manufacturing them into a layered solution is a new concept that’s still in the improvement stage, and therefore subject to initial endurance challenges.
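Endurance in this market is usually quoted as Drive Writes Per Day (DWPD) or Terabytes Written (TBW), and the two are related by a simple formula: TBW = DWPD × capacity × 365 × warranty years. A quick Python illustration; the 0.3 and 3 DWPD figures here are generic consumer-versus-industrial assumptions, not any specific product’s ratings:

```python
def tbw(dwpd, capacity_gb, warranty_years=5):
    """Terabytes Written implied by a Drive Writes Per Day rating:
    TBW = DWPD x capacity (TB) x 365 x warranty years."""
    return dwpd * (capacity_gb / 1000) * 365 * warranty_years

# Illustrative ratings only: 0.3 DWPD is typical of consumer TLC drives,
# 3 DWPD of industrial/enterprise parts.
consumer = tbw(0.3, 1000)    # 547.5 TBW over five years
industrial = tbw(3.0, 1000)  # 5475.0 TBW over the same period
```

The tenfold gap between the two ratings is the endurance margin that lower-endurance 3D TLC and QLC parts must close before they suit industrial duty cycles.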

That’s where Virtium is focusing its efforts. While lower-endurance chips may be fine for consumer applications and possibly select enterprise storage environments, the industrial market can’t compromise endurance for the sake of capacity. The myriad customer designs using Virtium industrial-grade products over the past 20+ years all have one thing in common: They absolutely require durability and data protection, even in the most extreme environments.

Figure 2: Like the human brain, 3D NAND’s multi-dimensional form enables near-limitless capabilities.

Durability of SSDs and protection of data aren’t arbitrary requirements or the demands of overly cautious designers, either; the data collected by embedded, IIoT, and M2M applications and stored in flash devices, 3D NAND or otherwise, is oftentimes business-critical. Information is currency, and the importance of that currency highlights the distinctions between off-the-shelf SSDs and those designed and built for those business-critical applications.

More at Stake
Consumer-grade SSDs do not require the endurance and reliability of industrial-grade SSDs. Consumer-grade SSDs are sufficient for desktop and laptop computers, but industrial applications require a more robust product—with a longer product life cycle and tolerant of a wider temperature range.

Additionally, consumer-grade SSDs don’t fit the embedded and IIoT mold because they may exceed power requirements and are only available in consumer-oriented form factors. On the enterprise side, the higher capacities and high IOPS, which bring higher power requirements and cost, simply aren’t needed for industrial applications.

Because IIoT endpoints are usually found in harsh and/or remote environments, the SSDs used here must be able to support extreme temperatures, vibration, and shock. They also must be built with a “set it and forget it” purpose and last longer than your typical SSD. Because the applications and the critical data they collect and store simply cannot be compromised, SSDs can also be subject to monitoring and predictive analysis. So, there’s far more at stake in embedded-system, IIoT, and M2M data collection and storage applications. And those stakes serve as guiding factors as flash makers develop devices with significantly higher capacity, as we’re witnessing in 3D NAND, without compromising the durability of SSDs or protection of their data.

We at Virtium are watching closely how 3D NAND evolves—it may at some point bridge the endurance gap and be suitable for demanding IIoT applications. In the meantime, the flash chips Virtium does use for our industrial-embedded SSDs, coupled with the drive-manufacturing processes we employ, ensure customers the drives will withstand harsh environments and the data they contain will be well protected.


Scott Lawrence is Director of Business and Technology Development, Virtium Solid State Storage and Memory.


Pushing Data to the Limits

Tuesday, September 12th, 2017

Data demands are pushing traditional and cloud data centers to their limits. NVMe can remove the bottleneck and scale storage across mobile, client, enterprise, and data center applications.

Data is everywhere and used by everyone today. From Snapchat images and YouTube videos, to location data on personal mobile apps, to files used in enterprise and business, there is a data explosion.

While everyone is trying to predict the number of devices that will be connected via the Internet of Things (IoT), we should also be looking at the amount of data the IoT will generate. Devices could generate as much as 600 ZBytes each year by 2020—it was 145 ZBytes in 2015.

The base for this data proliferation is also shifting. In 2015, the Cisco Global Cloud Index: Forecast and Methodology, 2015–2020 report predicted that by 2020, cloud data centers would process 92 percent of workloads, eclipsing the share processed by traditional data centers. The same report predicted that data center storage installed capacity would increase nearly five-fold over the same five-year period, rising from 382 EBytes to 1.8 ZBytes. (An EByte is 2^60 bytes and a ZByte is 2^70 bytes.) The report also anticipated that data stored in data centers would grow at a similar rate, from 171 EBytes in 2015 to 915 EBytes by 2020. Big data alone, said the report, will account for 27 percent of data stored in data centers by 2020.
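The unit relationships behind these figures are easy to sanity-check in Python, using the binary definitions of 2^60 bytes per EByte and 2^70 bytes per ZByte:

```python
EBYTE = 2 ** 60  # one exabyte, binary definition
ZBYTE = 2 ** 70  # one zettabyte

assert ZBYTE // EBYTE == 1024  # a ZByte is 1,024 EBytes

# Cisco's forecast: installed capacity rising from 382 EBytes to 1.8 ZBytes
growth = (1.8 * ZBYTE) / (382 * EBYTE)
assert 4.5 < growth < 5.0  # consistent with the "nearly five-fold" claim
```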

Data Acceleration
We all play our part in contributing to the data growth. Although not all data created is stored on the device, Cisco believes that data stored on devices will be five times the amount of data stored in data centers (5.3 ZBytes by 2020).

The reason for all this data is to share it, with friends, colleagues, utilities, and customers. The connectivity has to keep up with the increased storage rates, and the low power consumption rates of mobile devices have to be factored when considering storage interface options. This is why the Non-Volatile Memory Express (NVMe) organization is working on NVMe 1.3 to address the needs of mobile devices. The NVMe consortium developed the original specification to create a scalable, flexible, broad bandwidth, and low latency, open specification for enterprise and client, NVM-based storage. Architecting it from the ground up, the working group was able to remove register sets, feature sets, and command sets to optimize performance and make it more efficient than legacy interfaces, such as the Small Computer System Interface (SCSI). The NVMe protocol allows enterprise, data center, and client systems to access Solid State Drives (SSDs) on a Peripheral Component Interconnect Express (PCIe) bus, as well as across fabrics to connect devices, networks, data centers, and cloud services.

NVMe Evolution
At this year’s Flash Memory Summit, Toshiba announced that it expects to complete development of the CM5 NVMe series of SSDs.


Figure 1: Toshiba’s CM5, shown alongside its PM5 Serial Attached SCSI (SAS) SSD, introduces 64-layer Flash memory for enterprise-class SSDs.

Demonstrated at the show in Santa Clara, California in August, the dual-port PCIe Gen 3 x4 SSDs are built using the 64-layer, three-bit-per-cell, Triple Level Cell (TLC) BiCS FLASH™ 3D memory stacking technology. They support multiple-stream write technology and deliver up to 800,000 random read Input/Output Operations Per Second (IOPS), with up to 240,000 random write IOPS for the five Drive Writes Per Day (DWPD) model and up to 220,000 random write IOPS for the three-DWPD version. Maximum power draw, for both, is 18W. Capacities range from 800 GBytes to 15.36 TBytes, with Sanitize Instant Erase (SIE) and Trusted Computing Group (TCG) functions.

The company also demonstrated its BG3 series of single package, Ball Grid Array (BGA) SSDs, also built on its 64-layer TLC BiCS FLASH memory technology. The NVMe SSDs are designed for mobile devices, such as laptops and tablets, with a small footprint and a height of just 1.3mm. Integrated into the single package are a Toshiba-developed controller and firmware, with its Flash memory. For security, there are self-encrypting drive options with TCG Opal Version 2.01.

Figure 2: The small form factor BG3 SSDs use BiCS FLASH memory technology for mobile, enterprise applications.

The design uses the NVMe specification’s Host Memory Buffer (HMB) feature to manage the Flash memory from host memory, delivering performance without integrated Dynamic Random Access Memory (DRAM), says the company. The SSDs have a PCIe Gen 3 x2 lane interface and are based on the NVMe Revision 1.2.1 architecture to deliver up to 1,520 MBytes per second sequential read and up to 840 MBytes per second sequential write speeds. They are offered in a surface-mount BGA module or a removable module and are available in 128-, 256-, and 512-GByte capacities.

Another exhibitor, Micron, introduced the 9200 series of NVMe SSDs. They are the second generation of NVMe drives from the company and feature 3D NAND. According to the company, they deliver enterprise Flash performance that is up to 10 times faster than that of typical Serial Advanced Technology Attachment (SATA) SSDs, yet are able to conserve power and rack space with 3D NAND high-density storage, delivering up to 900,000 IOPS. That figure is based on 100 percent random 4-KByte read performance, compared with a typical Tier 1 data center SATA SSD’s average of 85,000 IOPS, says Micron. Capacity exceeds 10 TBytes, sufficient for target applications such as online transaction processing (OLTP) in retail sales, customer relationship management (CRM) systems, high-frequency trading, and high-performance computing.

Taking Shape
Another BGA offering is from Samsung. The PM971-NVMe is claimed to be the industry’s first NVMe PCIe SSD in a single BGA package. The integrated package has sixteen 48-layer, 256-Gbit V-NAND Flash chips, a 20-nanometer 4-Gbit Low Power Double Data Rate 4 (LPDDR4) mobile DRAM, and a Samsung controller. It measures 20mm x 16mm x 1.5mm and weighs one gram, making it suitable for use in mobile devices, Personal Computers (PCs), and slim notebook PCs.

Figure 3: Samsung’s PM971-NVMe SSD is in a single BGA package for space-constrained, mobile applications.

Sequential read speed is up to 1,500 MBps, and sequential write speed reaches 900 MBps when using the company’s proprietary TurboWrite technology to temporarily use portions of the SSD as a write buffer. Samsung puts these figures into context for mobile and consumer applications, saying they equate to transferring a five-GByte full High Definition (HD) movie in three seconds and downloading it in six seconds. Random performance is up to 190,000 read IOPS and 150,000 write IOPS. As well as the 512-GByte version, the company has announced 256- and 128-GByte options.
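Samsung’s movie-transfer figures check out arithmetically. A small Python sanity check, assuming the decimal units (1 GByte = 1,000 MBytes) that marketing figures typically use:

```python
def transfer_seconds(size_gbytes, rate_mbytes_per_s):
    """Seconds to move a file at a sustained rate, using decimal units
    (1 GByte = 1,000 MBytes)."""
    return size_gbytes * 1000 / rate_mbytes_per_s

read_time = transfer_seconds(5, 1500)  # the five-GByte HD movie at 1,500 MBps
write_time = transfer_seconds(5, 900)  # the same file at the 900 MBps write rate

print(round(read_time, 1), round(write_time, 1))  # roughly 3.3 s and 5.6 s
```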

The announcement at Flash Memory Summit follows last year’s introduction of the V-NAND-based, M.2 form factor 960 PRO and 960 EVO SSDs, built on the NVMe protocol. Both use a PCIe Gen 3 x4 lane interface. They also have the company’s Dynamic Thermal Guard technology to protect the SSD’s operation even at extreme temperatures.

Transfer speeds for the 960 PRO are up to 3,500 MBps sequential read and 2,100 MBps sequential write. For random read and write operations, performance is up to 440,000 and 360,000 IOPS, respectively. Capacities for the 960 PRO are 512 GBytes, 1 TByte, or 2 TBytes.

The 960 EVO is available in 1-TByte, 250-GByte and 500-GByte capacities. Sequential read and write speeds are up to 3,200 MBps and 1,900 MBps respectively, with random read speeds of up to 380,000 IOPS and up to 360,000 IOPS for random write operations.

Shortly after Flash Memory Summit closed its doors for another year, Taiwanese company ATP announced an NVMe-based M.2 2280 SSD. The 3D NAND Multi-Level Cell (MLC) SSD has a PCIe Gen 3 x4 lane bus interface and offers capacities from 128 GBytes to 1 TByte. Sequential read operations are up to 1,260 MBps, and sequential write operations are up to 980 MBps.

Figure 4: ATP uses the M.2 form factor for its SSD, targeting mission-critical applications.

The company emphasizes the SSD’s longevity and reliability for mission-critical applications. It reports wide-temperature and extreme power-cycling testing, together with temperature feedback and a thermal throttling mechanism for heat dissipation. These features make it suitable for industrial, medical imaging, IoT, and surveillance applications, as well as server and networking projects.

As the NVM Express Work Group develops Version 1.3 of the specification, manufacturers will be eager to see how NVMe can be used as storage interface across all platforms, from data center storage systems to mobile devices.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast, and automotive.

Where to Start with Embedded Processor Security

Wednesday, July 12th, 2017

Multicore embedded SoCs see security demands rising

The world runs on data, and every bit or byte is a potential target for attack. At the same time, both software and hardware systems are becoming much more complex, connected, and interdependent. And with complexity come vulnerabilities. The billions or trillions of lines of code and interrelated hardware modules, subsystems, and partitions—all crammed on tiny slices of silicon—are a hacker’s delight.


Of course, hackers are not standing still. Reports of vulnerabilities in embedded systems go on and on: satellite communication systems, wireless base stations, laser printers in residences and businesses, the smart electrical grid, medical devices like defibrillators, and many other systems are at risk. The need for security in multicore embedded system-on-chips (SoCs) has only increased. Embedded devices like cardiac medical equipment, smartphones, and automotive control units rely on multiple components, including embedded SoCs, to protect the control center. In this article, I’ll examine secure boot, the foundational layer of security for embedded processors. With secure boot, the system is protected from power-on. Without secure boot, the system has a vulnerable gap from power-on to usage.


Architectural Considerations
Embedded security starts in hardware. Coupling software and hardware security features together enables a more secure layer of protection than either solution working independently. Vendor-provided tools can streamline the development of security subsystems and ensure that the resulting architecture meets developer requirements. For example, hardware-based security accelerators can mitigate the performance cost of a security subsystem.

Of course, the strength of a security architecture will depend on its foundation. Three aspects of the foundational layer are essential: a secure boot process, hardware-based device IDs/keys, and cryptographic acceleration.

The Security Pyramid
The security pyramid (Figure 1) illustrates the various layers and constituent parts of a comprehensive security subsystem for a multicore SoC embedded processor.

Figure 1: Security pyramid

Secure Boot
A secure boot process establishes a root-of-trust for the embedded system. Even when initiating booting from external flash memory, a secure boot process verifies the integrity of the boot firmware through any number of mechanisms, including embedded cryptographic keys. A secure boot layer safeguards the system against takeover by malware, cloning of the in-system intellectual property (IP), inadvertent execution of unwanted applications, and other security risks.

Secure boot also provides an additional layer of protection by encrypting the IP and copying it securely to protected internal memories. The ability to encrypt also adds security for the code base, as it prevents directed exploration attacks.

Cryptographic Acceleration
Cryptographic processing, which involves the generation, verification, and certification of various public and private keys, can take a toll on the performance and throughput of an embedded system. Some SoCs are equipped with hardware-based accelerators or co-processors that speed up the coding/decoding processes tremendously. Software-based acceleration is also available, but software is not as inherently secure as hardware-based cryptographic acceleration. Examples of common cryptographic functions include Advanced Encryption Standard (AES), Triple Data Encryption Algorithm (3DES), Secure Hash Algorithm (SHA), Rivest Shamir Adleman (RSA) and elliptic curve cryptography (ECC).
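The performance cost that hardware accelerators offload is easy to observe in software. The Python snippet below times a pure-software SHA-256 digest over a 16-MB buffer; the buffer size is arbitrary, and Python’s hashlib is only a stand-in for whatever software implementation an embedded system would otherwise run on its application cores:

```python
import hashlib
import time

payload = bytes(16 * 1024 * 1024)  # 16 MB of zeros to digest (size is arbitrary)

start = time.perf_counter()
digest = hashlib.sha256(payload).hexdigest()
elapsed = time.perf_counter() - start

assert len(digest) == 64  # SHA-256 always yields a 256-bit (64 hex char) digest
print(f"software SHA-256 over 16 MB took {elapsed * 1000:.1f} ms")
```

Every millisecond measured here is work a hardware accelerator or co-processor could take off the main cores.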

Debugging Security

During system development, you need access to embedded multicore processors in order to debug firmware and software and troubleshoot possible hardware problems. In most cases, access is via the Joint Test Action Group (JTAG) port. In an operating environment, the debugging port must either be sealed closed by some sort of fuse, or accessible only through certified cryptographic keys. Otherwise, the debugging port could provide an easy way into the system for hackers (Figure 2).

Figure 2: Texas Instruments MSP430™ microcontroller debugging port.

Trusted Execution Environment
The run-time security layer comprises several distinct capabilities that all play a part in protecting the system following the boot-up process and execution of the operating system (OS). An important aspect of run-time security is monitoring all aspects of the system to determine when an intrusion has either occurred or been attempted. Trusted execution environment (TEE) security (Figure 3) enables a system to host secure and nonsecure applications concurrently while maintaining a partition through the system such that no data leaks between them.

A TEE essentially provides a secured partition within a multicore system that enables the execution of certified secure firmware, software, and applications only, and the storage of certified data only. Walling off the TEE from the rest of the multicore/multiprocessing system prevents suspect code, applications, and data that may pass through the system from contaminating mission-critical software, data, and other IP.

Figure 3: A trusted execution environment.

Secure Storage
Cryptographic keys and security data must be stored in system memory in locations that are impervious to unwanted access. A number of capabilities can provide secure storage, including encrypted Binary Large Object (BLOB) keys, anti-tamper protection that only a master key can unlock, a private key bus between nonvolatile memory and cryptographic engines, and others.

External Memory Protection
When adding another application or subsystem, you usually have to add memory that is external to the main processor and connect to it through a memory bus. You must protect the data stored in external memory against tampering or replacement to ensure that external memory only stores trusted data or application code. A number of methods can safeguard the contents of external memory, including secured execute-in-place directly from external memory (without loading data into the processor’s integrated memory) and decrypt on the fly, which can maintain confidentiality while enabling applications to run on the main processor.

Networking Security
Hackers are quite adept at intercepting wireless and wired network communications. In fact, some communication protocols have known security weaknesses that hackers have exploited. Deploying only highly secure communication protocols often involves a significant number of processing cycles to encrypt and decrypt the communication stream, as well as to verify the authenticity of the sender or receiver. Designers may face balancing communication throughput and security, but some embedded processors avoid this dilemma by integrating hardware-based accelerators for the cryptographic algorithms used in conjunction with standard communication protocols.

Physical Security and Tamper Protection
Sophisticated and not-so-sophisticated hacking organizations have been known to remove chips from a system or a silicon die from a chip package to access embedded assets (Figure 4). Once the device or die has been removed, hackers can bombard the devices with lasers, power them up beyond their specified power limits, or employ other means. Their objective: Observe how the device reacts to stimulus because responses may betray vulnerabilities that hackers can then exploit to access the device. Tamper-protection modules integrated into embedded multicore processors can contain power and temperature monitors, reset functionality, frequency monitors, and programmable tamper-protection capabilities.

Figure 4: An example of a device under physical attack.

Enclosure Protection
Enclosure protection features are physical measures that safeguard the enclosure encasing a system. These features can range from locking mechanisms to electronic switches, break-away wire-tripping mechanisms, and others (Figure 5).

Figure 5: Enclosure protection.

Where to Start with Embedded Security?
The security of an embedded multicore processor is founded in hardware. If the hardware is not secure, no amount of security software can make it so. Assuming security features are already built into the hardware, the first place to begin building a security subsystem is the first software that executes after power up: the boot code. If the boot process cannot be authenticated, then no software that runs afterward can be trusted either. Securing the boot process is thus the fulcrum upon which all of the security in the system rests. A secure boot process establishes the root-of-trust, which is the goal of every security subsystem.

Usually, a secure boot process involves programming a public cryptographic key into nonvolatile, one-time-programmable memory somewhere in the system. This public key must correspond to the private key used to sign the boot code, so that the encrypted boot code can be authenticated before execution begins. Booting firmware can either be loaded into the embedded processor’s random access memory (RAM) or, for added security, secured and executed in place out of memory external to the embedded processor. Some firmware images consist of several components or modules; requiring authentication before decrypting and executing each module further enhances boot security. Examples of embedded processors that offer a secure boot process can be found in the Sitara™ AM43x, AM335x, and AM57x processor families from Texas Instruments.
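The chain of trust just described can be sketched in a few lines. In a real boot ROM the key-to-image link is an asymmetric signature check (RSA or ECDSA); in this toy model, with made-up key and image contents, that check is abbreviated to a digest comparison so only the structure of the chain is shown: the OTP fuses pin down the public key, and the key in turn vouches for the boot image.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_boot(otp_key_hash: bytes, public_key: bytes,
                image: bytes, signed_digest: bytes) -> bool:
    # Step 1: the public key itself must match the hash burned into OTP fuses
    if sha256(public_key) != otp_key_hash:
        return False
    # Step 2: the image must match the digest authorized by that key
    # (stand-in for a real RSA/ECDSA signature verification)
    return sha256(image) == signed_digest
```

Because the OTP hash is programmed once and cannot be rewritten, an attacker can neither substitute a rogue key nor a modified image without failing one of the two checks.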

Embedded processor security is a multifaceted, complex subject. With the ascent of the Internet of Things (IoT) and the ubiquity of embedded systems, hackers—now more than ever—have an abundance of prime targets.


Amrit Mundra is a Security Architect and Systems Engineer in TI’s Processor group. He is responsible for defining end-to-end security for TI’s processor platform, which includes single- and multicore processor devices with ARM and DSP cores. Previously, Amrit architected and implemented various cryptographic cores, IPSEC engines, and security subsystems, and he recently defined the security for TI’s point-of-sale device, securing it to meet payment security standards.

Q&A with Ilika CEO Graeme Purdy

Thursday, June 1st, 2017

A battery breakthrough that’s not about setting the world on fire

Materials and battery technology company Ilika is targeting such Industrial IoT sectors as factory automation, automotive, transportation, and more with a high-temperature battery suitable for hostile environments, the Stereax P180.

Around a decade ago Toyota sought know-how in materials and solid-state battery technology to help it transition away from flammable lithium-ion batteries in its Prius automobiles. During our EECatalog conversation with Ilika CEO Graeme Purdy, following the company’s April announcement of its Stereax P180 extended-temperature-range solid-state battery, he recalled what happened after Toyota turned to Ilika: “We identified the material sets that were suitable,” Purdy told us. That understatement glosses over a discovery with reach well beyond automotive: the same nonflammable materials would prove a natural fit for the miniature batteries that Industrial IoT sensors would come to use. With that work, Ilika set itself on the path to debuting solutions such as the Stereax P180. Edited excerpts of our interview with Purdy follow.

EECatalog: How much of an opportunity does the Industrial IoT represent, and what makes it possible for devices targeting that market to take advantage of the opportunity?

Graeme Purdy, CEO, Ilika

Graeme Purdy, Ilika: In China, for example, the Industrial IoT is growing faster than the consumer IoT. They’ve got factories whose efficiency and productivity they are looking to improve: fertile ground for automation and information gathering.

But to address that market, you need to come up with a device that meets a series of quite demanding requirements. First, you need small, unobtrusive beacons that you can retrofit in what are sometimes rather inaccessible places. That device also needs to operate across the standard industrial temperature range. It needs to conform to the concept of “fit and forget”: you install the device, and it must remain operational for a decent period of time. Our rule of thumb is that, typically, you want these devices to last for 10 years.

EECatalog: Longevity, and therefore lower Total Cost of Ownership (TCO), is one of the benefits of a collaborative project in which Ilika is participating, one that combines your solid-state battery technology with an energy-harvesting solution.

Purdy, Ilika: When you have to deploy your maintenance crew to change out batteries, the cost of that labor can make the total cost of ownership rather unappealing. For that collaboration, we use a single silicon substrate. We put down a layer of battery material, and then we put down the photovoltaic or solar panel on top of that. So, you’ve got an integrated single component which is like an energy brick, where you’ve got both your means of harvesting energy and storing it in one component (Figure 1).

Figure 1: A battery with the means of both storing and harvesting energy.

[This approach] reduces cost, complexity, and the scale at which your systems must be built. A number of medical applications, where size, particularly for implants, may be one of the overriding considerations, would find this of interest. And of course, running cabling is either expensive or not practical in certain instances, particularly where you may have mobile devices or assets that you are looking to gather data from.

EECatalog: And for cars, cabling weight becomes an issue.

Purdy, Ilika: Right. We are seeing a trend toward increased electronics in automotive, to the point where a modern vehicle has about 100 sensors built into it, and at the moment nearly all of those sensors are hardwired. The weight of cabling in a modern car has risen to between 60 and 100 kilos, depending on the model. We are now moving toward autonomous vehicles, where you need on the order of 1,000 sensors to get them to function properly. What we can’t afford to do is increase the weight of the cabling toward a metric ton. Automotive companies are looking at lighter weight, distributed energy management systems where the sensors can function independently of the central power source.

EECatalog: Where else do you see enterprises benefitting from storage which is small size, high capacity and able to handle temperature extremes?

Purdy, Ilika: Aerospace is a similar analog [to automotive] for that opportunity, except that you get more extremes of temperature. So, you get very low temperatures at altitude, and when the airplane is parked up at the gate you get heat soak back from the engines. You need a sensor system that will be robust against these extremes.

And then infrastructure is also an interesting opportunity, particularly in the developed world, where we have quite a lot of legacy infrastructure: bridges and roads that were built 100 years ago, sometimes in extreme environments. Bridges must remain stable in earthquake zones, and you don’t want to call maintenance crews to the tops of suspension-bridge towers regularly to change out batteries. Systems have to be self-powering and robust.

EECatalog: Which battery technology issues should be more prominent as the Industrial IoT braces for substantial growth, and which less so?

Purdy, Ilika: One of the big drivers in the battery industry has been to reduce cost, and I think that has been achieved very effectively over the past 10 years or so—you have seen a relentless driving down of cost. I saw some figures the other day that costs have fallen 50 percent every five years over the last 10 years or so and are likely to fall further in the coming few years, largely because there have been some massive investments in battery production—the Tesla Gigafactory in Nevada is an example of that. That has meant that battery technology could be deployed within consumer electronics, and it may well open the door to larger scale batteries being ready for mass adoption in other [sectors], for instance the Electric Vehicle and off-grid storage markets.

From the technology perspective, a lot of work has been done on cathode materials because that is how you define the capacity of the battery—it has been the limiting component. Organizations have been keen to announce increasingly large amounts of storage capacity, so that has been a key driver.

I think flammability has been neglected, but as people increasingly look to deploy batteries in transport applications, where flammability is more critical, more resources are being directed there.

EECatalog: How can the features of storage technology you are describing be leveraged for design wins?

Purdy, Ilika: We’ve got a technology that complies with industrial standards, and even if the actual devices don’t always get used at the extremes of the temperature range those standards are designed for, at least you get compliance. With that compliance, developers can say, “I can group this component with other compliant electronic components, and therefore I can confirm that the whole design, as an integrated solution, meets what the industrial standards set out.” That’s an important step forward.

[It’s also key] that this technology is available under a license. It’s a fabless offering. Like many organizations in the semiconductor space, we see this as being a device which would be made in pre-qualified foundries. If an OEM designed a product around this battery technology, we would transfer the technology through to the foundry. The foundry would then make it in the quantity that was required, aggregating it with demand from other customers to meet the market needs. So it would need to be integrated into designs which have sufficient volume to justify a production run, and I think that is probably the most economical way that we can make this available.

EECatalog: Why is the solid-state battery market one where there are both green and brown field opportunities?

Purdy, Ilika: The temperature range is one we haven’t seen in battery technology before. A typical rechargeable lithium-ion battery will go to 60 ºC or 70 ºC; it can’t go higher because the organic liquid electrolyte inside starts to evaporate, causing swelling in the battery and ultimately battery failure. The electrolyte is also flammable, and so it leads to battery explosions. There has been reluctance to use lithium-ion technology in some industrial environments because of that. The availability of nonflammable solid-state ceramic components creates the opportunity for an inexpensive monitoring solution on a fabrication line, for example.

Green field [opportunities] are probably more in automotive and aerospace where people are asking, “What is our car going to look like in 10 years’ time when we’ve got all this automation and electronics?”

Motorsport has always been an early adopter of new technology. [They know] they have loads of sensors now in Formula 1 vehicles to inform the driver of racing conditions. And they are interested in wireless technology that could be enabled with this type of battery technology, which could allow them to do things in next season’s competitions they were not able to do earlier. You often see that cascade of technology from high-end motorsports through to the mainstream.

EECatalog: What steps have you taken to avoid supply source issues?

Purdy, Ilika: We use readily available materials. We screened out expensive stuff, so there are no rare earth elements in these batteries. They use standard cathode materials, and we can choose from a range of them: lithium cobalt oxide is used quite a lot as the cathode in solid-state batteries, and of course in normal lithium-ion batteries as well.

One of the things that makes this solid-state battery different is that we use a silicon anode, and of course silicon is very cheap and widely available. It makes a robust solid-state battery if engineered in the way that we have designed these. And the encapsulation relies on well-developed techniques that have been used in the OLED industry for barrier layers. We are combining elements and materials that are quite widely available but in a unique way that gives this battery performance.

SLC NAND: Secrets Exposed

Wednesday, January 11th, 2017

Why SLC NAND endurance has altered and what this change means for high reliability embedded storage going forward.

Single Level Cell (SLC) NAND flash is no longer the stuff of headlines. Consumer markets are chasing the latest nodes and densities in Multi-Level Cell (MLC), Triple-Level Cell (TLC), or the up-and-coming 3D NAND memories, leaving SLC NAND to the smaller “high reliability” market. However, in the world of embedded systems, where product life cycles are measured in decades rather than years, SLC NAND is still in heavy use. Despite this continued use in applications requiring long-life or high-reliability solutions, NAND manufacturers have quietly made changes to their SLC NAND offerings that have steadily decreased its endurance.

A Completely Different Story

The SLC NAND being manufactured today is not the same as the SLC NAND that was available even a few years ago. While today’s SLC NAND still has higher endurance than any other NAND technologies manufactured today, the endurance is significantly less than the 5x nm and larger SLC NAND of yesteryear. The 5x nm and larger SLC NAND devices need very little management to be reliable in an embedded system. Simple single bit error correcting algorithms more than suffice to make 5x nm and larger SLC NAND useable in most applications. A little bit of management and redundancy can make an embedded system with this NAND practically impervious to flash wear out, and even industry experts view wear-leveling as a “plus” in 5x nm SLC applications. Today’s SLC NAND devices, however, are a completely different story, and the industry has downplayed or overlooked the changes to SLC NAND in favor of its flashier cousins.
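Wear-leveling, the “plus” mentioned above, is straightforward to sketch: direct each new write to the free block with the fewest erases so that no single block wears out ahead of the rest. The toy allocator below illustrates the idea; it is a deliberately minimal model, not any vendor's flash translation layer.

```python
class WearLeveler:
    """Toy dynamic wear-leveler: always write to the least-erased free block."""

    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))

    def allocate(self) -> int:
        # Pick the free block with the lowest erase count for the next write
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def release(self, block: int) -> None:
        # Erasing a block before reuse is what consumes endurance
        self.erase_counts[block] += 1
        self.free_blocks.add(block)
```

Even this naive policy keeps the spread of erase counts across the device within a single cycle, instead of letting a frequently rewritten file hammer one physical block to death.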

Figure 1: As NAND transitioned to 4x/3x nm technology, the endurance dropped to 70,000 and fewer program/erase cycles per cell.

Along with DDR RAM, NAND flash has been a driver of lithographic process node scaling. When NAND was introduced in 1984, the size of the process node was 0.7μm and endurance was 100,000 or more program/erase cycles. This endurance was maintained through the 5x nm process nodes, but as it transitioned to 4x/3x nm technology, the endurance dropped to 70,000 and fewer program/erase cycles per cell (Figure 1). Current leading NAND lithographies are less than 2x nm. A closer look at NAND flash technology shows that even users of SLC NAND need to be concerned about decreasing NAND endurance, especially in embedded or high reliability applications.

NAND flash endurance is determined by the number of program/erase cycles a cell can endure before the erased state of the cell is no longer discernible from the programmed state. In practical application, however, it is determined by how long a flash cell can be used before unrecoverable data corruption occurs. There are several mechanisms of data corruption, all of which are aggravated by shrinking process lithographies.

Flash Memory Construction

A single bit of flash memory is constructed from a transistor with a floating gate, which can store electrons for an extended time. Electrons reach the floating gate by tunneling through the thin oxide layer that isolates it (Figure 2). This tunneling effect is created when a large gate voltage is applied to the device: the gate voltage creates a field in the channel, increasing the energy of the electrons and causing some of them to tunnel through the thin oxide layer. The charge can be removed from the floating gate by reversing the gate voltage and pushing the electrons back through the thin oxide layer.

Figure 2: Storage via floating gate.

When electrons are stored on the floating gate, the threshold voltage, or the gate voltage where the transistor begins to conduct, changes. If there are no electrons on the gate, then the transistor acts like a normal MOSFET. When electrons are stored on the floating gate, their negative charge shields the conductive channel from the gate and prevents or limits the current flow from the source to the drain. This change in the threshold voltage modulates the current/voltage characteristics of the cell, so the status of the floating gate can be read by simply applying a voltage to the terminals and measuring the resulting current.
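A read therefore reduces to a single comparison of the cell's threshold voltage against a reference. The numbers in this toy model are illustrative only, not from any datasheet.

```python
READ_REF_V = 2.5  # hypothetical read reference voltage

def read_cell(threshold_v: float) -> int:
    # Electrons on the floating gate raise the threshold voltage; above the
    # reference the cell does not conduct at the read voltage, and the sense
    # circuitry reports it as programmed (0). An erased cell conducts (1).
    return 0 if threshold_v > READ_REF_V else 1
```

With only one reference, SLC stores one bit per cell; MLC and TLC subdivide the same voltage window with additional references, which is why they are far more sensitive to small threshold shifts.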

As the NAND flash lithography nodes are scaled down, the number of electrons available to move to the floating gate decreases. This is a well discussed fact for MLC NAND, but the same physics applies directly to SLC flash as well. In smaller lithographies, a small change in the number of electrons on the floating gate can dramatically affect the threshold voltage (Figure 3). With each reduction in NAND flash lithography, it becomes very difficult to achieve the same performance and endurance of the previous process node. The reduced number of electrons available makes smaller lithography devices even more susceptible to threshold voltage shifts caused by damage, leakage or disruptions.

Figure 3: Any change in charge will affect the threshold voltage of a cell.

The gate threshold voltage is variable regardless of the lithography a NAND cell is made on. Because it can be affected by a number of factors, it is typically expressed as a statistical distribution. The statistical distribution of the threshold voltage on new flash defines the difference between a programmed and an erased cell. The voltages used in programming a NAND flash cell slowly damage the thin oxide layer that isolates the floating gate, allowing more charge to be trapped on the floating gate. At any geometry, this damage accumulates over time, narrowing the gap between the threshold regions and pushing the threshold voltage of an erased cell over the detection threshold used to detect the programmed state (Figures 4, 5). At smaller lithographies the geometries used in NAND construction are even smaller, resulting in faster wear-out and lower endurance. In an embedded system this means that the same software application can wear out newer SLC NAND at much faster rates than ever before.
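This wear-out mechanism can be caricatured numerically: trapped charge drifts the erased-cell threshold toward the detection threshold a little more each cycle, and a smaller geometry (modeled as more drift per cycle) fails sooner. All constants below are invented for illustration.

```python
ERASED_V = 1.0   # hypothetical fresh erased-cell threshold voltage
DETECT_V = 2.5   # hypothetical programmed-state detection threshold

def endurance_cycles(drift_per_cycle: float) -> int:
    """Cycles until trapped charge makes an 'erased' cell read as 'programmed'."""
    v, cycles = ERASED_V, 0
    while v < DETECT_V:
        v += drift_per_cycle   # oxide damage traps a bit more charge each cycle
        cycles += 1
    return cycles
```

In this stand-in model, doubling the per-cycle drift (a crude proxy for a lithography shrink) roughly halves the endurance, which is the qualitative trend Figure 1 shows for real devices.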

Another source of variability in the gate threshold voltage is accidental charge collection. Flash cells are structured in vast arrays, with each cell packed tightly with its neighbors. Programming or reading a cell will apply elevated voltage stress to all the neighboring cells. Occasionally this stress will cause electrons to accidentally tunnel up onto the floating gates of the neighbor cells. As any change in charge will affect the threshold voltage of a cell, these accidental electrons can cause a cell to appear programmed when it should be erased. Luckily the acquisition of accidental electrons does not wear out the oxide layer, and the electrons are easily removed with an erase of the flash. With shrinking lithographies bringing individual cells closer to each other, the chances of accidental charge collection are much higher.

Figure 4: Attaining the same performance and endurance of the prior process node becomes more difficult with each reduction in NAND flash lithography.

Figure 5: At smaller lithographies the geometries used in the NAND construction are even smaller, resulting in faster wear out and lower endurance.

Error Correcting Techniques

Accidental charge collection from a neighboring NAND cell being programmed or read can be detected by using error-correcting techniques. There are several error correction algorithms used with NAND flash, but all of them entail calculating and storing an extra value, known as the Error Correcting Code (ECC), that allows an error to be detected. By applying the error correction algorithm as data is read back from the NAND flash, program or read disruptions in NAND cells can be detected. In addition to detecting an error, a small enough error can also be corrected. If the error is correctable, the affected NAND cell can be erased, reprogrammed with the correct data, and remain useful in an embedded system for many more program/erase cycles. Smaller lithography devices require more bits of ECC to compensate for the increased likelihood of a disruption due to the smaller geometries (Figure 6). A higher number of required ECC bits is frequently the best indicator that a smaller lithography is in use and that the overall endurance of the device has decreased.
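A Hamming code is the simplest ECC of the kind described; the (7,4) toy below corrects any single flipped bit in a 7-bit codeword. Production NAND controllers use stronger codes (such as BCH) over much larger sectors, but the mechanics are the same: store extra check bits, recompute on read, and use the resulting syndrome to locate the error.

```python
def hamming74_encode(d: list) -> list:
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code: list) -> list:
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    syndrome = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= pos    # XOR of 1-indexed positions of all set bits
    if syndrome:               # nonzero syndrome = position of the error
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

A valid codeword always produces a zero syndrome; a single bit flip, whether from wear or a read disturb, produces a syndrome equal to the flipped bit's position, so the decoder can repair it in place.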

SLC NAND manufacturers have been quietly rolling out smaller lithography devices with little comment about their decreasing endurance. SLC NAND flash users are being forced to transition to smaller lithography NAND flash as larger lithography devices disappear from the market. Embedded systems’ need for high-reliability storage hasn’t changed, but the endurance of SLC NAND suddenly has. The days of inherently reliable SLC NAND have silently slipped away, leaving many embedded systems stuck with NAND flash that no longer endures under the same real-life applications. How will the market adjust to address the needs of end customers? Technologic Systems will address it in its products by offering a state-of-the-art SLC NAND management solution. For details about this new NAND management layer, please read our white paper “XNAND2: NAND Device Driver for Today’s Lower Endurance SLC NAND”.

Figure 6: Often the clearest sign that a smaller lithography is being employed is the greater number of ECC bits a NAND flash device demands.



Eliza Nelson is a Hardware Design Engineer at Technologic Systems.
