Does Secure Erase Actually Work?

Chris A. Ciufo, Editor, Embedded Systems Engineering

In this Part 2 of 2, I examine the flash manufacturer’s Secure Erase feature, since so many DoD documents recommend it.

In Part 1 of this blog (“How Does One “Zeroize” Flash Devices?”), I set about finding DoD recommendations for “zeroizing” (or “sanitizing”) sensitive data from flash memory, including flash-based solid state disks (SSDs). Many of the government’s recommendations rely on the flash manufacturer’s Secure Erase command which is allegedly based upon the ATA’s recommendations of the same name. Yet research has been done that calls into question either how well this command works or how well manufacturers are implementing it in their devices. Either way, if the DoD allows a “self-policing” policy to protect sensitive data, I have concerns that the data isn’t safely locked up.

Note: unless otherwise specified by context, I use the terms “flash,” “flash memory,” and “solid state disks” or “SSDs” interchangeably, since the results are similar. SSDs are made up of flash memory plus control, wear-leveling, and sometimes encryption logic.

Flash in Freefall

In 2010, MLC flash memory hit its stride and became cheap and ubiquitous—and that made security an issue. According to data tracked by John C. McCallum, the price per MB of flash ($/MB) was around $0.22 in 2004; it dropped to $0.01 in 2006 and then hit roughly $0.002 in 2009. That is, it dropped about 20x from 2004 to 2006 and another order of magnitude in the next two to three years. By 2010, flash had moved from expensive boot memory and cheesy 128MB freebie USB sticks to a credible high-density medium that would challenge mechanical (rotating media) HDDs.

Computer-ready SSDs arrived on the scene around this time. They were crazy fast and moderately dense, but way more expensive than hard disks of the same capacity. The speed made them compelling, and it became obvious that important data would be stored on SSDs—so data security would eventually become important. As well, flash stores data differently than magnetic drives and requires a built-in wear-leveling algorithm to assure even “wear out” across internal memory blocks. Taken together, these issues catalyzed the industry to make recommendations for securely erasing devices, to assure data is really gone when a file is deleted.
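The wear-leveling idea can be sketched in a few lines. This is a toy model under my own assumptions—real controllers use far more elaborate schemes—but it shows why a controller that always picks the least-worn block spreads erase cycles evenly:

```python
# Minimal wear-leveling sketch (illustrative only, not vendor firmware):
# always erase the block with the fewest accumulated erase cycles.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # Choose the least-worn block so wear spreads evenly.
        return min(range(len(self.erase_counts)),
                   key=self.erase_counts.__getitem__)

    def erase(self, block):
        self.erase_counts[block] += 1

wl = WearLeveler(4)
for _ in range(8):
    wl.erase(wl.pick_block())
print(wl.erase_counts)  # erases spread evenly: [2, 2, 2, 2]
```

The side effect that matters for security: because writes and erases are steered around the array, “the same address” rarely means the same physical cells twice.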

Industry Recommendations

Let’s start with the industry recommendations presented at the Flash Memory Summit in 2010, about the time flash was gaining serious traction. As presented by Jack Winters, CTO of Foremay, numerous industries—including defense—needed a way to securely erase sensitive data stored in SSDs and flash memories. Merely deleting files or reformatting an SSD is not acceptable because the data remains intact. The only way to successfully erase is to “overwrite all user data in allocated blocks, file tables and…in reallocated defective blocks,” said Mr. Winters at the time. Figure 1 summarizes the three types of ATA Secure Erase methods.

Figure 1: Secure Erase (SE) Method Summary, each offering pros and cons. (Courtesy: Flash Memory Summit, 2010; presented by Jack Winters of rugged SSD supplier Foremay.)

Type I software-based SE requires a user’s input via keyboard and utilizes a combination of the SE command processor, flash bus controller, (ATA) host interface, and the device’s sanitize command. The device’s bad block table is erased, rendering the device (or the entire SSD built from those flash components) useless for reuse. Type II is a hybrid of software and hardware kicked off by an external line such as a GPIO; its logic erases the device(s) in a way that allows the flash to be reused once the drive is reformatted. For defense customers, it’s unclear to me whether Type I or Type II is better—the point is to sanitize the data. Reusing the drive, no matter how expensive it is, is of secondary concern.

Finally, Mr. Winters points out that Type III SE also kicks off via an external GPIO but involves a high-voltage generator along with the controller to destroy the NAND flash transistors within seconds. The drive is not usable—ever—after a “purge”; it’s completely ruined. Note that this kind of erasure isn’t among the NSA’s “mechanical pulverization” sanitization procedures, and it’s unclear whether Type III would meet the NSA’s guidelines for data removal.

These recommended SE procedures for flash made me wonder whether the techniques applied to rotating HDDs would also work on SSDs—or whether some users might wrongly assume they are effective at securely sanitizing sensitive data stored on SSDs. After all, if the DoD/NSA recommendations are ambiguous, might users be misapplying them?

Refereed Research: Reliably Erasing SSDs?

An oft-cited refereed paper on the subject of SE appeared in 2011: “Reliably Erasing Data From Flash-Based Solid State Drives,” written by Michael Wei et al. (Note: the 30-minute video of his paper can be found here.) Mr. Wei’s team at UCSD reached three key conclusions:

  • Built-in SE commands are effective…but manufacturers sometimes implement them incorrectly (my emphasis).
  • Overwriting twice is usually, but not always, sufficient.
  • None of the existing HDD techniques for individual file sanitization are effective on SSDs.

This latter point is important: SSDs store data differently than HDDs and therefore require flash-specific SE procedures, like the ones described above. According to Wei, “the ATA and SCSI command sets [for HDDs] include ‘secure erase’ commands that should sanitize an entire [HDD] disk.” But they don’t work on SSDs. SSDs map each logical block address to a raw flash location through a look-up table called the Flash Translation Layer (FTL). This is done for a variety of reasons, from improving speed and wear-out endurance to “hiding the flash memory’s idiosyncratic interface,” says Wei.

Wei and his colleagues investigated the ATA sanitization commands, software techniques to sanitize drives, and common software to securely erase individual files. Researchers dug deeply into the memories using forensic techniques—which is not unlike what a determined adversary might do when trying to extract classified data from a recovered DoD or military SSD.

Cutting to the chase: Wei discovered that trying to sanitize individual files “consistently fail[s]” to remove data. As well, software sanitizing techniques—those not built into the drives—are usually successful at the whole-drive level, but the overwrite pattern may provide clues to the data or “may impact the effectiveness of overwriting.”

In fact, one of my colleagues from Mercury Computer’s Secure Memory Group (formerly Microsemi) told me that knowing the nature of the original data set provides some clues about that data merely by examining the overwrite patterns. It’s third-order deeply technical stuff, but it all points to the need for built-in flash SE circuitry and algorithms.

Another key point from Wei and his colleagues is that retrieving non- or poorly-sanitized data from SSDs is “relatively easy,” and can be done with tools and disassembly costing under $1,000 (in 2011). Comparable tools to recover erased files from rotating-media HDDs cost over $250,000. This points to the need for proper SE on SSDs.

Doing It Wrong…and Write

For SSDs, SE is based on the ATA Security “Erase Unit” (ATA-3) command, originally written for HDDs in 1995, which “erases all user-accessible areas on the drive” by writing 1’s or 0’s into those locations. There is also an Enhanced Erase Unit command that lets the flash vendor write whatever pattern best renders the device or overall SSD “sanitized.” Neither command specifically writes to non-user-accessible locations, even though flash devices (and hence SSDs) may contain 20 to 50 percent more cells for storage, speed, and write-endurance purposes. Finally, some drives contain a block erase command that sanitizes the non-user-accessible locations as well.
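The distinction can be modeled in a few lines. This is a conceptual sketch with invented structure names (“user_area,” “spare_area”), not real drive firmware: plain and Enhanced erase cover only the user-accessible space, while a block erase also reaches the over-provisioned spare area:

```python
# Conceptual model of the three erase behaviors described above.
# All names are illustrative assumptions, not any vendor's firmware.

def secure_erase(drive, mode="plain"):
    """mode: 'plain' | 'enhanced' | 'block'."""
    # Plain erase writes zeros; Enhanced writes a vendor-chosen pattern.
    pattern = b"\x00" if mode == "plain" else drive["vendor_pattern"]
    for i in range(len(drive["user_area"])):
        drive["user_area"][i] = pattern
    # Only a block erase also reaches the non-user-accessible spare area
    # (the extra 20-50 percent of cells mentioned above).
    if mode == "block":
        for i in range(len(drive["spare_area"])):
            drive["spare_area"][i] = pattern

drive = {
    "vendor_pattern": b"\xa5",
    "user_area": [b"data"] * 4,
    "spare_area": [b"old!"] * 2,  # remapped blocks may still hold user data
}
secure_erase(drive, mode="enhanced")
# The user area is sanitized, but stale data survives in the spare area.
```

The takeaway: an erase command that never touches the spare area can report success while remapped blocks still carry fragments of user data.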

Wei et al’s 2011 data is shown in Figure 2. Clearly, this data is now five years old, and the reader needs to keep that in mind. The disturbing trend at the time of the research: of the 12 drives tested, 7 didn’t implement the Enhanced SE function; only one self-encrypted the data (a good thing); and 3 drives executed SE but actually left data on the drive. Drive B even reported a successful SE, yet Wei found that “all the data remained intact” (Wei’s emphasis, not mine).

Figure 2: Data reported by Wei et al in “Reliably Erasing Data From Flash-Based Solid State Drives,” 2011. This refereed paper is often cited when discussing the challenges of sanitizing flash memory and flash-based SSDs.

Recommendations for Sanitizing

The results shown in Figure 2 prompt the following recommendation from the researchers:

The wide variance among the drives leads us to conclude that each implementation of the security commands must be individually tested before it can be trusted to properly sanitize the drive.

Since these results were published in 2011, the industry has seen many changes as flash memory (and SSD) density increases while prices fall. Today, drive manufacturers are cognizant of the need for SE, and companies like Kingston even reference Wei et al’s paper, clearly telling their users that the SE commands implemented by the drives must be verified. Kingston states: “Kingston SSDNow drives support the ATA Security Command for proper data sanitization and destruction.”

My opinion, after reading piles of data on this topic, is exactly what Wei recommended in 2011: users with sensitive data wishing to sanitize a drive can rely on the ATA Secure Erase command—as long as it’s correctly implemented. To me, that means users should test their chosen drive(s) to verify for themselves that the data is actually gone. When you find a vendor that meets your needs, put that drive under a Source Control Drawing with revision control, and stick with your Approved Vendor List. Buying a different drive might leave your data open to anyone’s interpretation.
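A do-it-yourself check along these lines might look like the sketch below. This is hedged: a regular file stands in for the raw device (a real test would open the block device, which needs root, and trigger the vendor’s erase between the two steps), and the sentinel string is my own invention. Note that it only inspects the user-visible LBA space—Wei’s team also used chip-level forensic access, which is beyond any simple script:

```python
# Hedged sketch: seed a "device" with a recognizable sentinel pattern,
# run the drive's secure erase out-of-band, then scan for survivors.
# A plain file stands in for the block device here.

SENTINEL = b"C2-TEST-PATTERN!"

def seed(path, repeats):
    # Fill the device with a known, searchable pattern.
    with open(path, "wb") as f:
        f.write(SENTINEL * repeats)

def survivors(path):
    # After the erase step, count any copies of the pattern left behind.
    with open(path, "rb") as f:
        return f.read().count(SENTINEL)
```

If `survivors()` returns anything but zero after the erase, the drive’s SE implementation failed the most basic test—exactly the failure mode Wei observed on several drives.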

Technology, Philosophy, and Kitty Litter: An Interview with VITA’s Ray Alderman

By: Chris A. Ciufo, Editor, Embedded Systems Engineering

Chairman of the Board, Ray Alderman, presents a unique view of how embedded companies compete, thrive and die in the COTS market.

One never knows what Ray Alderman is going to say, only that it’s going to be interesting.  As Chairman of the Board of VITA (and former Executive Director), Ray is a colorful character. We caught up with him to discuss a recent white paper he wrote entitled: “RAW – How This Embedded Board and Systems Business Works.” We posed a series of questions to Ray about his musings; edited excerpts follow.

Chris “C2” Ciufo: Ray, you reference the Boston Consulting Group matrix that places companies in four quadrants, arguing that most of the companies in our embedded COTS industry are Low Volume (LV)/High Margin (HM) “Niche” players. The place not to be is the LV/LM “Graveyard”—right where technologies like ISA, S-100, Multibus and PCI Gen 2 are. But…PCI Express?

Ray Alderman: I was careful to say “PCI Express Gen 2.” That’s because Gen 3 is on our doorstep, and then there will be Gen 4, and so on. Gen 2 will be EOL [end of life] before too long. The niche players in our market—all embedded boards, not just VME/VPX—rarely take leadership in mainstream technology. That position is reserved for the four companies that control 75% of the commercial embedded market segment, or $1.5 billion. They are ADLINK, Advantech, congatec, and Kontron: these guys get the inside track with technology innovators like Intel and Nvidia; they’ll have PCIe Gen 4 product ready to ship before the niche players even have the advanced specs. Everyone else has to find other ways to compete.

C2: You said that “in the history of this industry, no company has ever reached $1 billion in sales” because as the volumes go up, customers shift to contract manufacturers to lower their prices. Only three companies ever came close to the HV/LM quadrant. Who were they?

Ray: Advantech, Kontron and Motorola Computer Group (MCG). MCG, you’ll recall, was amalgamated with Force when sold by Solectron, and then morphed into Emerson Computer Group. MCG damn near ruled the VME business back then, but as my model points out—it was unsustainable. Advantech and Kontron are still around, although Kontron is going through some—ahem!—realignment right now. My model and predictions still hold true.

C2: What’s causing this growth-to-bust cycle in the embedded market? Not all markets experience this kind of bell curve: many keep rising beyond our event horizon.

Ray: Since about 1989, the companies that had to sell out or went out of business made one of two basic mistakes: (1) they entered into a commodity market and could not drive their costs down fast enough, or (2) they entered a niche market with a commodity strategy and the volumes never materialized.

I’ve been saying this for a while—it’s practically “Alderman’s Law”—but our military embedded board and system merchant market (all form factors) is about $1.2 billion. The cat litter market in the U.S. is about $1.8 billion, and their product is infinitely less complicated.

C2: Wait—are you really comparing kitty litter to embedded technology?

Ray: By contrast. Cat litter margins are low, volumes are high and they use a complex distribution system to get the litter to cats. Our margins are high, our volumes are low, and we deal direct with the users. The top three companies in the military segment—Abaco [formerly GE Intelligent Platforms], Curtiss-Wright Defense Solutions and Mercury—total up to about $750 million. They’re around $200 million each. They add intellectual value and enjoy high GPM [gross profit margin].

On the other hand, the commercial embedded board market for telecom, industrial, commercial and transportation totals to about $2.0 billion. Using kitty logic, the dry cat food market in the U.S. is about $3.8 billion. Their margins are low, volumes are high, and they use a complex distribution system. The players in the commercial board market have low margins, low volumes (compared to other segments), and sell directly to end users. It’s a terrible place to be. Kitty litter or cat food?

C2: What’s your advice?

Ray: I’m advocating for the military market, where margins are higher. About 61% of the military embedded board/system market is controlled by the three vendors, $750 million. The remaining $450 million (39%) is shared by many small niche vendors: nice, profitable niches. Several smaller companies do $30-50 million in this segment.  In contrast, only four companies control 75% of the commercial embedded boards market, or roughly $1.5 billion. That leaves a mere $500 million (25%) for all of the other smaller companies. Thus there are not many fairly large or profitable niches for these smaller guys—and not many of them do more than $10-15 million. Kitty litter, anyone?

C2: Can you offer some specific advice for board vendors?

Ray: There are only three values you can add to make money in these markets: manufacturing value, service value, and intellectual value. Adding intellectual value is where you add high-level technical skills that other companies do not have. Examples: high speed A-to-D boards where companies like Mercury and Pentek live. You can also add DSPs with unique IP inside. Again, Mercury and Pentek come to mind. In fact, Mercury (then Mercury Computer Systems) proved this model nicely when they invented the RACEway inter-board link and created ASICs to implement it. If you want to raise your GPM, this is how you do it.

In fact, Mercury is still doing it. They bought Echotek some years ago for their I/O boards and just recently bought three divisions of Microsemi. With this latest acquisition, they gain secure storage memories, crypto IP, and a bunch of RF capabilities to add to their existing RF portfolio. Today, RF technology is “magical” and Mercury will be able to charge accordingly for it to maximize their GPM.  Most of the embedded board military suppliers add their value to the market through intellectual value. It makes the most sense.

C2: Is the recipe for success merely targeting niche markets and adding intellectual value?

Ray: I’ll let you in on a little secret. The margin on boards is much higher than the margin on systems. It’s ironic, because every board guy seems to want to get into the systems business, and there have been lots of M&A [mergers and acquisitions] over the past several years. If you’re going to do systems, you’ve got to raise the price, especially if you’re selling air-cooled [convection] systems. Conduction-cooled systems command a higher price, but they’re harder to design.

You also need to choose the niche carefully, but that goes without saying. If you can add intellectual value to your niche—such as high performance GPGPU processing—you can command higher prices, whether at the board- or systems level.

There are only three ways to be successful in the embedded boards and systems business. Be first, be smarter, or cheat. Let me explain.

Being first is usually relegated to the big guys, like Abaco, Curtiss-Wright, or Mercury. They get access to the latest semiconductor technology, which is a fundamental driver in all of our markets. Examples here would be in-advance knowledge of Intel’s Kaby Lake follow-on to the Skylake Core i7 processor, or Nvidia’s plans for their next GPU. The smaller board vendors won’t get access, so they usually can’t be first.

One other thing, the big guys can also adapt a market to them. That is, they have enough influence that they can actually move an entire market. The smaller guys just have to find other ways.

But they can be smarter. Force Computer couldn’t (at the time) beat Motorola’s Computer Group because Motorola was inventing the 68xxx processors back then. So Force switched to the SPARC processor and built a successful business around it.  In effect, Force adapted to a market that was hungry for processing power—it didn’t have to be 68020 or 68040 processing power. [Editor’s note: in fact, the 68040 wasn’t successful because Motorola themselves introduced their PowerPC processor to the market, which was co-developed with IBM. The market moved away from the 68xxx CISC processor to the PPC60x RISC processor; the rest is “history.”]

C2: And lastly, how should companies “cheat” to win?

Ray: It’s hard to cheat in the open market, against big entrenched players. The best way to cheat is to fragment an existing market. Sun Tzu called this the “Divisional” strategy. Companies can create a niche such as by creating an open standard for your version of a board or system architecture. Creating a niche is like being smarter, but is marketing-based instead of being engineering-based.

At VITA/VSO, the policies and procedures allow any company, along with two other sponsors, to write a new standard without interference. There are countless examples of this within VITA, and many of these “fragmented niches” have become successful standards that we use today, including FMC, PMC, and XMC [mezzanine cards]. Older standards like Greenspring [mezzanine modules] were successful but are now mostly obsolete. There are other new standards such as the three for rugged small form factors [VITA 73, 74, 75]. And the various OpenVPX profiles are other examples, such as the new “Space VPX” and “Space VPX Lite.”

C2: Any last thoughts?

Ray: As Albert Einstein once said, “We cannot solve problems by using the same kind of thinking we used when we created them.” My point: look to new architectures beyond von Neumann’s architecture that the semiconductor guys keep forcing on us. Consider fiber interconnects as a way to get off the copper-trace technology curve. Create a niche—“cheat” if you have to. Just don’t end up following a kitty litter business strategy, else you’ll be taken out with the trash.

How Does One “Zeroize” Flash Devices?

By Chris A. Ciufo, Editor Embedded Systems Engineering

Editor’s Note: This is Part 1 of a two-part article on the topic of securely erasing data in flash devices such as memories and SSDs. In Part 2, I examine the built-in flash secure erase feature intended to eradicate sensitive data and see if it meets DoD and NIST specifications.

I was recently asked the question of how to go about “zeroizing” flash memory and SSDs. I had incorrectly assumed there was a single government specification that clearly spelled out the procedure(s). Here’s what several hours of research revealed:

DoD has no current spec that I could find besides DoD 5220.22-M, “National Industrial Security Program” [1]. This 2006 document, prefaced by the Under Secretary of Defense, cancels a previous 1995 recommendation and discusses some pretty specific procedures for handling classified information. After all, the only reason to sanitize or zeroize flash memory is to eradicate classified information like data, crypto keys, or operating programs (software). The document makes reference to media—including removable media (presumably discs, CDs and USB drives at that time)—and the need to sanitize classified data. However, I was unable to identify a procedure for sanitizing the media.

There is, however, a reference to NIST document 800-88, “Guidelines for Media Sanitization,” published in DRAFT form in 2012. It is a long document that goes into extensive detail on types of media and the human chain of command for handling classified data, and Appendix A provides lengthy tables on how to sanitize different media. Table A-8 deals with flash memory and lists the following steps (Figure 1):

  • Clear: 1. Overwrite the data “using organizationally approved and validated overwriting technologies/methods/tools,” with at least one pass writing zeros into all locations. 2. Leverage the “non-enhanced” ATA Secure Erase feature built into the device, if supported.

  • Purge: 1. Use the ATA sanitize command via a) block erase or b) Cryptographic Erase (aka “sanitize crypto scramble”); one can optionally apply the block erase command after the sanitize command. 2. Apply the ATA Secure Erase command, though the built-in sanitize command (if available) is preferred. 3. Use the “Cryptographic Erase through TCG Opal SSC or Enterprise SSC,” which relies on media (drives, including SSDs) with the FIPS 140-2 self-encrypting feature.

  • Shred, Disintegrate, Pulverize, or Incinerate the device. This literally means mechanically destroying the media such that, if any 1’s and 0’s remain on the floating transistor gates, it’s not possible to reconstruct those bits into useful data.
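The Clear step’s overwrite-then-verify flow can be sketched as below. This is a hedged, minimal illustration: the file path stands in for the device, the 4KB chunk size is an arbitrary choice, and real tooling must also contend with drive caches and error reporting:

```python
# Sketch of NIST 800-88's "Clear": a single zero-fill pass, then a
# read-back verification. A file stands in for the device being cleared.

CHUNK = 4096  # arbitrary chunk size for this sketch

def clear_device(path, size):
    # One pass: write zeros over every location.
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(CHUNK, size - written)
            f.write(b"\x00" * n)
            written += n

def verify_cleared(path):
    # NIST notes the Clear must be verified; check every byte is zero.
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            if any(chunk):
                return False
    return True
```

Note that, per the FTL discussion in Part 2, a host-side overwrite like this only covers the logical address space—which is exactly why NIST also lists the device’s built-in sanitize and Secure Erase mechanisms.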

Figure 1: Recommended ways to sanitize flash media per NIST 800-88 DRAFT Rev 1 (2012).

Of note in the NIST document is a footnote stating that Clear and Purge must each be verified; Crypto Erase needs verification only if performed prior to a Clear or Purge. In all cases, every procedure except mechanical eradication relies on mechanisms built into the drive/media by the manufacturer. There is some question whether that is as secure as intended—and the NSA, America’s gold standard for all things crypto, has only one recommended procedure.

The NSA only allows strong encryption or mechanical shredding, as specified in the “NSA/CSS Storage Device Sanitization Manual.” This 2009 document is now a bit difficult to find, perhaps because the NSA is constantly revising its Information Assurance (IA) recommendations in response to the changing cyberspace threats of information warfare. Visiting the NSA’s IA website requires a DoD PKI certificate per TLS 1.2 and “current DoD Root and Intermediate Certificate Authorities (CA) loaded” into the browser. Clearly the NSA follows its own recommendations.

The manual is interesting reading in that one has only two choices: cryptographically protect the data (and the keys) and hence not worry about sanitization, or render the media (drive) completely unrecognizable with zero probability of any data remaining. By “unrecognizable,” think of an industrial shredder or an iron-ore blast furnace. When it’s done, there’s nothing remaining.

Recent discussions with government users on this topic reminded me of the Hainan Island Incident in 2001 where a Chinese fighter jet attempting an intercept collided with a US Navy EP-3 SIGINT aircraft. The EP-3 was forced to make an emergency landing on China-controlled Hainan, giving unauthorized access to classified US equipment, data, algorithms and crypto keys (Figure 2). It was a harrowing experience, sadly causing the death of the Chinese pilot and the near-fatalities of the 24 Navy crew.

The crew had 26 minutes to destroy sensitive equipment and data while in the air using a fire axe, hot coffee and other methods, plus another 15 minutes on the ground, but it was widely reported to be only partially successful. While this sounds far-fetched, the topic of sanitizing data is so critical—yet so unresolved, as described above—that allegedly some current-generation equipment includes a visible “Red X” indicating exactly where an operator is to aim a bullet as a last ditch effort to mechanically sanitize equipment.

Figure 2: US Navy EP-3 SIGINT plane damaged in 2001 by collision with Chinese fighter jet. The crew did only a partial sanitization of data. (Image courtesy of and provided by Lockheed Martin Aeronautics.)

From Pulverize to Zeroize

There’s a lot of room between the DoD’s wish to have classified data and programs zeroized and the NSA’s recommendation to pulverize. The middle ground is the NIST spec listed above, which relies heavily on flash manufacturers’ built-in secure erase options. While there are COTS recommendations for secure erase, they are driven not by military needs but by the need to protect laptop information, corporate data under Sarbanes-Oxley legislation, health records under HIPAA, and financial data.

In Part 2 of this article, I’ll examine some of the COTS specifications built into ATA standards (such as Secure Erase), recommendations presented at Flash Memory Summit meetings, and raise the question of just how much trust one can place in these specifications that are essentially self-certified by the flash memory manufacturers.

[1] Previously, DoD relied on NISPOM 8-306; NSA had NSA 130-2 and NSA 9-12; Air Force had AFSSI-5020; Army had AR 380-19; and Navy had NAVSO P-5239-26. These all appear to be out of date and possibly superseded by the latest 5220.22-M. As a civilian, it’s unclear to me—perhaps a reader can shed some light?

CES Turns VPX Upside Down Using COM

Instead of putting I/O on a mezzanine, the processor is on the mezzanine and VPX is the I/O baseboard.

[ UPDATE: 19:00 hr 24 Apr 2015. Changed the interviewee's name to Wayne McGee, not Wayne Fisher. These gentlemen know each other, and Mr. McGee thankfully was polite about my misnomer. A thousand pardons! Also clarified that the ROCK-3x was previously announced. C. Ciufo ]

The computer-on-module (COM) approach puts the seldom-changing I/O on the base card and mounts the processor on a mezzanine board. The thinking is that processors change every few years (faster, more memory, from Intel to AMD to ARM, for example) but a system’s I/O remains stable for the life of the platform.

COM is common (no pun) in PICMG standards like COM Express, SGET standards like Q7 or SMARC, and PC/104 Consortium standards like PC/104 and EBX.

But to my knowledge, the COM concept has never been applied to VME or VPX. With these, the I/O is on the mezzanine “daughter board” while the CPU subsystem is on the base “mother board.”

Until now.

Creative Electronic Solutions—CES—plans to extend its product line with more 3U OpenVPX I/O carrier boards onto which “processor XMC” mezzanines are added. An example is the newer AVIO-2353 with a VPX PCIe bus—meaning it plugs into a 3U VPX chassis and acts as a regular VPX I/O LRU. By itself, it has MIL-STD-1553, ARINC-429, RS232/422/485, GPIO, and other avionics-grade goodies.

The CES ROCK-3210 VNX small form factor avionics chassis.

But there’s an XMC site for adding the processor, such as the company’s MFCC-8557 XMC board that uses a Freescale P3041 quad-core Power Architecture CPU. If you’re following this argument, the 3U VPX baseboard has all the I/O, while the XMC mezzanine holds the system CPU. This is a traditional COM stack, but it’s unusual to find it within the VME/VPX ecosystem.

“This is all part of CES’s focus on SWAP, high-rel, and safety-critical ground-up design,” said Wayne McGee, head of CES North America. The company is in the midst of rebranding itself, and its shiny new website makes those intentions known.

CES has been around since 1981 and serves high-rel platforms like the super-collider at CERN, the Predator UAV, and various Airbus airframes. The emphasis has been on mission- and safety-critical LRUs and systems “Designed for Safety” to achieve DAL-C under DO-178B/C and DO-254.

“We’ll be announcing three new products at AUVSI this year,” McGee told me, “and you can expect to see more COM-style VPX/XMC combinations with some of the latest processors.” Also to be announced are extensions to the company’s complete VNX small form factor (SFF) chassis systems, such as a new version of the rugged open computer kit (ROCK-3x), previously announced in February at Embedded World.

CES is new to me, and it’s great to see some different-from-the-pack innovation from an old-school company that clearly has new-school ideas. We’ll be watching closely for more ROCK and COM announcements, all targeting small, deployable, safety-certifiable systems.

IHS Embedded Ranks VME/VPX Suppliers

With vendor-supplied data, analyst firm IHS ranks the largest embedded suppliers in the VME/VPX market.

[Update 22 Jan 14: Replaced figure with the original slide from IHS; added a link to the entire IHS presentation here.  C. Ciufo ]

At today’s Embedded Tech Trends insider conference in Phoenix, IHS senior analyst Toby Colquhoun revealed the top suppliers in the VME and VPX market space for the year ended 2012 (the latest data available). The conference is sponsored by the standards organization VITA that’s responsible for these open standards.  It’s always a challenge to get quantitative data on this niche market which primarily services the world’s rugged military and aerospace markets with harsh environment modules, connectors and systems.

GE Intelligent Platforms is the largest supplier when VME, VPX and systems are combined, followed by: Curtiss-Wright Controls Defense Systems, Mercury Computer, Kontron, and Emerson Network Power (Figure 1, with apologies for the quality).

IHS ranking of VME and VPX suppliers for the year ended 2012, as presented at the Embedded Tech Trends conference. (Courtesy: IHS, VITA, ETT.)


Toby also indicated that the VME market is shrinking, as legacy designs migrate to VPX modules and systems. In the VPX-only market for modules and systems, the ranking changes to:

1. Curtiss-Wright at 38 percent

2. GE Intelligent Platforms at 19 percent

3. Mercury Computer at 16 percent.

This ranking is consistent with my own expectations (and CW’s recent press releases proclaiming themselves number one). Interestingly, when I asked about small form factor systems like those from these same suppliers, plus ADLINK, Advantech, MEN Mikro and others, Toby responded that IHS doesn’t see these kinds of rugged systems encroaching on the VME/VPX market. I disagree, but can’t quantify that just yet.

We’ll update this data once we receive the actual presentation later today.


End of an embedded era: Emerson De-”Mots” Motorola Embedded

As Emerson Network Power gets sold off to Platinum Equity, Motorola Computer Group, Force Computer, Artesyn, and more names may disappear into the history books soon.

8/7/13 UPDATE: Several people have commented that the napkin analysis below neglects to account for the “power” side of Emerson Network Power. ENP was also partly assembled via acquisition, including Astec, Liebert, and others. One comment sent to me noted, “The embedded power unit has been on the market for a buyer for quite some time…” Finally, some questions were raised about the size of the open-standard ATCA/xTCA markets, with one person agreeing with my statement that the telcos are successfully using the standards to build their own hardware. This would reduce the TAM for non-captive vendors like Emerson Network Power. Thank you to all who corresponded with me privately. C2


Emerson today announced plans to sell 51 percent of Emerson Network Power to Platinum Equity for $300 million. It’s a shame, for sure. But what’s equally interesting are the embedded technologies and their creators leaving the Emerson camp, and how we got to this place.

Embedded Consolidation by Acquisition

Emerson Network Power became the $1.4 billion business it is today partly by acquiring Motorola Embedded Communications Computing in 2007 for $350 million, when ECC’s turnover was about $520 million (2006). The sale closed in 2008.

It was perhaps a bargain for Emerson at the time, in the interest of buying “embedded computing products and services to equipment manufacturers in telecommunications, medical imaging, defense and aerospace and industrial automation,” as the St. Louis Business Journal wrote. Motorola’s $520 million in sales was added to Artesyn’s $100 million embedded computing business, acquired by Emerson Network Power the year prior, adding up to over $600 million in revenue in 2007.

Just three years prior, Motorola’s board business was called “Motorola Computer Group” (MCG), and it had acquired the then-heavyweight Force Computers from board-stuffer Solectron. The terms of the agreement were not immediately disclosed, but I was able to ferret out the price of $121 million from a footnote on page 47 of Moto’s 2004 10-K here. Interestingly, it was slightly prior to this that Motorola spun off its semiconductor operations as Freescale Semiconductor, a separate financial entity at the time. The combined MCG and Force division became known as Motorola Embedded Communications Computing and was all about standards-based telecom and military products like VME, AdvancedTCA, and so on. But mostly about the telecom-focused AdvancedTCA (ATCA).

If you’re following the math, the cumulative total of acquisitions for these embedded technologies was about $721 million to this point. As I recall, Force didn’t belong to Solectron for very long; less than two years, I think. MCG + Force = Moto ECC added up to about 1,500 employees in August 2004, said the press release at the time. The division’s corporate vice president, Wendy Vittori (previously of Dell Computer if memory serves), said at the time: “We will be able to provide solutions for a wider range of customer application needs, supported by a broader portfolio of boards, systems, and services.”
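If you really are following the math, here is the napkin arithmetic as a minimal Python sketch. Note that the Artesyn purchase price is never stated in this piece; the figure below is simply backed out of my ~$721 million running total, so treat it as an inference rather than a reported number.

```python
# Napkin math on the acquisition chain described above (all figures in $M).
force_2004 = 121     # Motorola buys Force Computers (per the 2004 10-K footnote)
moto_ecc_2007 = 350  # Emerson buys Motorola ECC
stated_total = 721   # cumulative total cited in the text

# The Artesyn price isn't stated here; back it out of the running total.
artesyn_implied = stated_total - force_2004 - moto_ecc_2007
print(f"Implied Artesyn purchase price: ~${artesyn_implied}M")  # ~$250M
```

That implied ~$250 million for the Artesyn embedded computing acquisition is consistent with the $600-million-plus combined revenue figure above, but again, it is inferred, not reported.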

Moto was number one in VME, although it had ceded the rugged mil/aero market to the likes of Dy4 Systems (later Curtiss-Wright), Radstone, and SBS (both later part of GE Intelligent Platforms) in the late 1990s. Motorola led the non-mil market with Motorola’s/Freescale’s own PowerPC-based single board computers, whereas Force had leadership in Intel-based SBCs and broader networking products. Wendy was right: it was a pretty decent technology fit, and Motorola was at that time already parlaying its embedded products into the data center and telecom. A year prior, in 2003, Motorola had acquired NetPlane Systems, a telecom provider with data and control plane products…and captive customers.

When the Emerson/Motorola deal closed in 2008, an Emerson press release quoted several analysts praising the acquisition. It also said “A significant trend in the embedded computing industry is the adoption of industry standards, including ATCA, MicroTCA and AdvancedMC (AMC/xTCA)…currently more than 40 percent of network equipment providers are shipping ATCA-based systems.”

Present Tense

So far so good. In fact, I’ve followed the industry closely and agree that wired and wireless infrastructure build-outs continue to favor these embedded open standards-based products, and ATCA et al. have replaced proprietary telecom equipment. Emerson Network Power’s VME business, I suspect, never recovered, since the market for VME (and now the VXS and VPX variants) is almost entirely in defense. (Recall that Motorola walked away from that business ten years ago.) That leaves ATCA and xTCA targeting the telecom markets.

As recently as two months ago (May 2013), the head of the PICMG standards group responsible for ATCA, xTCA and AMC told me how well the telecom markets were growing. You can read my interview with Joe Pavlat here, where Joe estimated the market for ATCA at somewhere between $1.5 billion and $2.5 billion per year.

What happened?

In February 2013 Emerson’s CEO David Farr went on record with Fortune magazine as saying he wants to “double down in businesses that help manufacturers produce their wares” and to focus on cooling products (like air conditioners and chillers for data centers).

This might explain why Emerson would opt to leave this business, along with its pre-Motorola power business. The press release issued today cites the group’s revenue at $1.4 billion in 2012, probably less than the cumulative total of all those acquisition prices in real dollars if you carry them forward from 2008. In fact, the group should probably be selling over $2 billion to achieve the correct ROI on all of those acquisitions, but that bumps up against the ATCA TAM cited above by Joe Pavlat of PICMG. Did Emerson run out of ATCA runway?

That possibly explains the $300 million purchase price for 51 percent, making the overall sale roughly 50 cents on the dollar of last year’s gross sales. That also puts years’ worth of leading-edge VME, control plane, data plane, networking IP, ATCA, xTCA and other embedded technology up for sale by Platinum Equity. Or maybe not.
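Here is the arithmetic behind that estimate, assuming the $300 million buys exactly 51 percent of the equity and taking the press release’s $1.4 billion 2012 revenue at face value. The implied whole-company value pencils out closer to 42 cents on the dollar, in the same neighborhood as my round 50-cent figure.

```python
# Implied valuation behind the "50 cents on the dollar" estimate (all in $M).
stake_price = 300       # paid by Platinum Equity
stake_fraction = 0.51   # for 51 percent of the business
revenue_2012 = 1400     # group revenue per the press release

implied_value = stake_price / stake_fraction   # whole-company value implied
cents_on_dollar = implied_value / revenue_2012
print(f"Implied value: ~${implied_value:.0f}M ({cents_on_dollar:.0%} of 2012 sales)")
```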

Sell it, or Keep it?

Who might want this technology? If you assume that no Emerson Network Power customers will be lost in the process (CapEx equipment is not quickly designed out), Emerson’s competitors like Radisys, Kontron, IBM, Dell, and HP already have their own (open-standard) hardware. My bet is that the key value will be any proprietary IP owned by Emerson, plus customer relationships (read: backlog). Yet I cannot think of a single open-market company that would want to buy this technology and doesn’t already have the core technology. So why buy it?

But Platinum may own a core company that needs Emerson’s technology for itself: perhaps a telco or wireless provider that wants to produce its own ATCA equipment rather than buy it on the open market. For a mere $300 million (to start), that’s certainly a viable way to buy into a multi-billion-dollar telecommunications business. When asked to comment on this story, PICMG’s Joe Pavlat said: “Platinum Equity is extremely well regarded and has several other significant telecom investments that, at first glance, appear to be very complementary to the Emerson offerings.” Bingo.

So it may be the end of an era in which companies like Motorola, Force, Artesyn, and NetPlane created and implemented open standards-based embedded computers for the telecommunications industry. Hopefully these names and their creations will live on at another recognizable open-standards company. But I’m not hopeful; I suspect they’re gone forever, de-Mot’ed to the history books.

SMARC: ARM’d for a Power Play

ARM is migrating into the embedded board market, at the expense of x86 designs.


In the world of multicore, it’s hard to get more cores than the quads now shipping in the latest smartphones, most of which are based upon ARM. But what about the board-level embedded market that I follow more closely?

You know it’s a foregone conclusion that ARM’s going to win the low power wars here too when even the x86 PC/104 vendors start musing about the need for ARM roadmaps.


WinSystems VP Bob Burckle spins a PC/104 board. The company is considering adding ARM processors to its predominantly x86-based boards.


In my discussion with WinSystems, a company that helps drive the usually Intel-focused x86 trade consortia, Bob Burckle ponders an open-standard form factor for ARM-based single board computers.

I’ve come to learn that ADLINK, Congatec, Kontron and others have pushed the very concept of ARM-based SBCs through the Standardization Group for Embedded Technologies (SGET) in a computer-on-module (COM) standard they’re calling Smart Mobility ARChitecture (SMARC), version 1.0.

Smart Mobility ARChitecture (SMARC) is a COM processor module ideally suited for ARM processors. (Courtesy: Standardization Group for Embedded Technologies.)

It comes in 82mm x 50mm and 82mm x 80mm flavors, and Kontron is already implementing it for aircraft passenger in-flight entertainment (IFE) systems.

Figure 2: Kontron IFE plane cut-away.

Look for ARM processors on PC/104, VME, COM Express…and SMARC boards soon. Choices will come from Texas Instruments, Atmel, Qualcomm, NVIDIA, Xilinx, and even AMD (which has licensed ARM cores for the security engines in its APUs).

Kontron’s SMARC-sAT30 is a low-profile platform based on the SMARC specification; it integrates the 1.2 GHz NVIDIA Tegra 3 quad-core ARM (Cortex-A9) processor.



Rugged Shoebox Computers Still Popular; GE does an about “FACE”

Bottom line? The US Army realizes h/w changes faster than s/w, so FACE tries to make software portable by defining standard interfaces. This may be bad for the h/w vendors, though, since it cuts both ways.


GE Intelligent Platforms has introduced a rugged “shoebox” computer for mil systems called the FACEREF1. I’m scratching my head over the wisdom of the name, but FACE stands for Future Airborne Capability Environment, and the box is based upon the FACE Consortium’s specs for an open reference architecture. A sub-group of the Open Group (actually an “Open Group Managed Consortium”), the FACE Consortium “provides a vendor neutral forum” where industry and government work together to develop best practices and open standards for avionics. (Note to self: isn’t that what PICMG and VITA do?)

This isn’t the first time GE has developed a rugged shoebox. Back in 2005, SBS Technologies (later acquired by GE, if memory serves) rolled out the Rugged Operation Computer (ROC) shown in Figure 1. Launched at AUSA DC in 2005, this 5.75-pound “palm-sized” rugged shoebox was really unique in its day because it bucked the trend of sticking 6U VME cards in ATR boxes. At the time, about the smallest you could deploy using rugged COTS was a 1/2 ATR (short), whereas the ROC measured 3.5 (H) x 4.2 (D) x 6.8 (W) inches.

Figure 1: The SBS Technologies ROC was among the first COTS rugged shoeboxes, weighing a mere 5.75 pounds and equipped with either a Pentium M or PowerPC CPU in 2006.

That’s roughly one quarter the size of the equivalent VME ATR box. The ROC also used proprietary cards inside, though an industry-standard PMC card was a factory option. While companies like Dy4, Radstone, Curtiss-Wright and others were relying exclusively on open standards, SBS realized the value was at the system or box level, not the card. Why not put whatever worked inside? This theory is common today, but it wasn’t seven years ago.
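To sanity-check that size claim: the ROC’s published dimensions work out to about 100 cubic inches. The 1/2 ATR (short) dimensions below are commonly quoted ARINC 404 chassis figures, not numbers from this article, so treat the comparison as a rough sketch; it lands between one quarter and one fifth the volume, close enough for napkin purposes.

```python
# Rough volume comparison: ROC vs. a 1/2 ATR (short) chassis.
roc_vol = 3.5 * 4.2 * 6.8            # in^3, dimensions from the article (~100 in^3)
half_atr_vol = 4.88 * 7.62 * 12.52   # in^3, assumed ARINC 404 dimensions (~466 in^3)

ratio = half_atr_vol / roc_vol
print(f"ROC is roughly 1/{ratio:.1f} the volume of a 1/2 ATR short")
```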

In 2011, GE also introduced a similar rugged shoebox family, the CRS-C2P-3CC1 and CRS-C3P-3CB1 (what’s with the names, guys?), which this time was based upon standards: 3U CompactPCI from PICMG (Figure 2). They also ran Freescale PowerPCs with a Wind River operating system.


Figure 2: GE’s 3U CompactPCI CRS-C2P-xxx and -C3P-xxx were 2- and 3-slot open standard-based rugged shoeboxes. They were introduced in 2011.

Today’s FACEREF1 shoebox uses GE’s SBC312 SBC (Freescale P4080 8-core), plus a PMCCG1 graphics PMC (S3 2300E GPU), shown in Figure 3. What makes this shoebox unique isn’t really the hardware; it’s the software premise behind FACE, which makes GE’s rugged shoebox a software reference platform supported by a Wind River hypervisor, Presagis OpenGL for graphics, and the venerable VAPS XT object-oriented HMI tool from Presagis (formerly Virtual Prototypes, or VPI). FACE is sponsored by the US Army’s PEO Aviation, undoubtedly as a way of abstracting hardware to ensure software portability as COTS technology changes much faster than the certified code running on it.

Figure 3: GE’s latest rugged shoebox conforms to Future Airborne Capability Environment (FACE), an open platform that defines software interfaces and emphasizes portability to maximize warfighter value.