Does Secure Erase Actually Work?

Chris A. Ciufo, Editor, Embedded Systems Engineering

In this Part 2 of 2, I examine the subject of using the flash manufacturer’s secure erase feature—since so many DoD documents recommend it.

In Part 1 of this blog (“How Does One ‘Zeroize’ Flash Devices?”), I set about finding DoD recommendations for “zeroizing” (or “sanitizing”) sensitive data in flash memory, including flash-based solid state disks (SSDs). Many of the government’s recommendations rely on the flash manufacturer’s Secure Erase command, which is allegedly based upon the ATA command of the same name. Yet published research calls into question either how well this command works or how well manufacturers implement it in their devices. Either way, if the DoD allows a “self-policing” policy to protect sensitive data, I have concerns that the data isn’t safely locked up.

Note: unless otherwise specified by context, I use the terms “flash,” “flash memory,” and “solid state disk” or “SSD” interchangeably, since the results are similar. SSDs are made up of flash memory plus control, wear-leveling, and sometimes encryption logic.


Flash in Freefall

In 2010, MLC flash memory hit its stride and became cheap and ubiquitous, and that made security an issue. According to data tracked by John C. McCallum, in 2004 the price per MB of flash ($/MB) was around $0.22; it dropped to $0.01 in 2006 and then hit ~$0.002 in 2009. That is, it dropped about 20x from 2004 to 2006 and roughly another 5x over the next two to three years. By 2010, flash had moved from expensive boot memory and cheesy 128MB freebie USB sticks to a credible high-density medium that would challenge mechanical (rotating media) HDDs.

Computer-ready SSDs arrived on the scene around this time. They were crazy fast and moderately dense, but way more expensive than hard disks of the same capacity. The speed made them compelling, and it became obvious that important data would end up stored on SSDs—so data security would eventually matter. As well, flash stores data differently than magnetic drives and requires a built-in wear-leveling algorithm to assure even “wear out” across internal memory blocks. Taken together, these issues catalyzed the industry to make recommendations for securely erasing devices to assure data was really gone when a file was deleted.

Industry Recommendations

Let’s start with the recommendations industry presented at the Flash Memory Summit in 2010, about the time flash was gaining serious traction. As presented by Jack Winters, CTO of Foremay, numerous industries—including defense—needed a way to securely erase sensitive data stored in SSDs and flash memories. Simply deleting files or reformatting an SSD is not acceptable, because the data remains intact. The only way to successfully erase is to “overwrite all user data in allocated blocks, file tables and…in reallocated defective blocks,” said Mr. Winters at the time. Figure 1 summarizes the three types of ATA Secure Erase methods.

Figure 1: Secure Erase (SE) Method Summary, each offering pros and cons. (Courtesy: Flash Memory Summit, 2010; presented by Jack Winters of rugged SSD supplier Foremay.)


Type I software-based SE requires a user’s input via keyboard and utilizes a combination of SE command processor, flash bus controller, (ATA) host interface and the device’s sanitize command. The device’s bad block table is erased, rendering the device (or the entire SSD using the flash components) useless for reuse. Type II is a hybrid of software and hardware kicked off by an external line such as GPIO, but logic erases the device(s) to allow flash reuse once the drive is reformatted. For defense customers, it’s unclear to me whether Type I or Type II is better—the point is to sanitize the data. Reusing the drive, no matter how expensive the drive, is of secondary concern.

Finally, Mr. Winters points out that Type III SE kicks off via external GPIO but involves a high voltage generator along with the controller to destroy the NAND flash transistors within seconds. The drive is not usable—ever—after a “purge”; it’s completely ruined. Note that this kind of erasure isn’t mentioned in the NSA’s “mechanical pulverization” sanitization procedures, and it’s unclear whether Type III would meet the NSA’s guidelines for data removal.

These recommended SE procedures for flash made me wonder whether the techniques long applied to rotating HDDs would also work on SSDs—or whether some users merely assume they are effective at securely sanitizing sensitive data stored on SSDs. After all, if the DoD/NSA recommendations are ambiguous…might users be misapplying them?

Refereed Research: Reliably Erasing SSDs?

An oft-cited refereed paper on the subject of SE appeared in 2011: “Reliably Erasing Data From Flash-Based Solid State Drives,” written by Michael Wei et al. (Note: the 30-minute video of his paper can be found here.) Mr. Wei’s team at UCSD reached three key conclusions:

  • Built-in SE commands are effective…but manufacturers sometimes implement them incorrectly (my emphasis).
  • Overwriting twice is usually, but not always, sufficient.
  • None of the existing HDD techniques for individual file sanitization are effective on SSDs.

This latter point is important: SSDs store data differently than HDDs and therefore require flash-based SE procedures, like the ones described above. According to Wei, “the ATA and SCSI command sets [for HDDs] include ‘secure erase’ commands that should sanitize an entire [HDD] disk.” But they don’t work on SSDs. SSDs map each logical block address to a raw flash location through a sort of look-up table called the Flash Translation Layer (FTL). This is done for a variety of reasons, from improving speed and wear-out endurance to “hiding the flash memory’s idiosyncratic interface,” says Wei.
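
To make that concrete, here is a deliberately tiny, hypothetical Python model of an FTL (no vendor’s real FTL works this simply) showing why overwriting a logical block does not necessarily erase the physical page that held the old data:

```python
# Toy illustration only: why an in-place "overwrite" on an SSD can leave the
# original data sitting in physical flash until garbage collection runs.

class ToyFTL:
    def __init__(self, num_pages=8):
        self.flash = [None] * num_pages   # physical NAND pages
        self.lba_map = {}                 # logical block address -> physical page
        self.next_free = 0

    def write(self, lba, data):
        # NAND pages can't be rewritten in place, so the FTL writes the new
        # data to a fresh page and remaps the logical block to point at it.
        page = self.next_free
        self.next_free += 1
        self.flash[page] = data
        self.lba_map[lba] = page          # the old page is now stale, but its
                                          # contents remain until it is erased

    def read(self, lba):
        return self.flash[self.lba_map[lba]]

ftl = ToyFTL()
ftl.write(0, b"TOP SECRET")
ftl.write(0, b"\x00" * 10)     # host "overwrites" logical block 0 with zeros
print(ftl.read(0))             # the host sees zeros...
print(ftl.flash[0])            # ...but b"TOP SECRET" is still in physical flash
```

A real drive’s garbage collection eventually erases stale pages, but until then an adversary with chip-level access can still read them—which is exactly the kind of forensic access Wei’s team used.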

Wei and his colleagues investigated the ATA sanitization commands, software techniques to sanitize drives, and common software to securely erase individual files. Researchers dug deeply into the memories using forensic techniques—which is not unlike what a determined adversary might do when trying to extract classified data from a recovered DoD or military SSD.

Cutting to the chase for the sake of brevity, Wei discovered that trying to sanitize individual files “consistently fail[s]” to remove data. As well, software sanitizing techniques—not built into the drives—are usually successful at the entire drive level, but the overwrite pattern may provide clues to the data or “may impact the effectiveness of overwriting.”

In fact, one of my colleagues from Mercury Computer’s Secure Memory Group (formerly Microsemi) told me that knowing the nature of the original data set provides some clues about that data merely by examining the overwrite patterns. It’s third-order deeply technical stuff, but it all points to the need for built-in flash SE circuitry and algorithms.

Another key point from Wei and his colleagues is that retrieving non- or poorly-sanitized data from SSDs is “relatively easy,” and can be done using tools and disassembly costing under $1,000 (in 2011). Comparable tools to recover erased files from rotating-media HDDs cost over $250,000. This points to the need for proper SE on SSDs.

Doing It Wrong…and Write

For SSDs, SE is based on the ATA Security “Erase Unit” (ATA-3) command, originally written for HDDs in 1995, that “erases all user-accessible areas on the drive” by writing 1’s or 0’s into those locations. There is also an Enhanced Erase Unit command that allows the flash vendor to write whatever pattern onto the flash devices (and hence the overall SSD) best renders the device or drive “sanitized.” Neither of these commands specifically writes to non-user-accessible locations, even though flash devices (and hence SSDs) may contain 20 to 50 percent more cells than advertised, held in reserve for storage management, speed, and write endurance. Finally, some drives contain a block erase command that sanitizes the drive, including non-user-accessible locations.
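
For readers who want to see what issuing these commands looks like in practice, here is a hedged sketch using the Linux hdparm utility. The flags are standard hdparm security options, but drive support varies, the device path and password are placeholders, and the operation destroys all data—treat it as an illustration, not a validated procedure.

```python
# Hedged sketch: issue the ATA Security "Erase Unit" command via hdparm on
# Linux. Destructive! Assumes /dev/sdX is a directly attached, unmounted SATA
# drive that is not in the "frozen" security state, and that you are root.
import subprocess

DEV = "/dev/sdX"   # placeholder target drive
PWD = "p"          # temporary security password

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. Check the drive's security capabilities (look for "enhanced erase"
#    support and the estimated erase times in the Security section).
run("hdparm", "-I", DEV)

# 2. Set a temporary user password; the drive refuses Secure Erase without one.
run("hdparm", "--user-master", "u", "--security-set-pass", PWD, DEV)

# 3. Issue the erase. Substitute --security-erase-enhanced if the drive
#    reports support for the Enhanced Erase Unit command.
run("hdparm", "--user-master", "u", "--security-erase", PWD, DEV)
```

As Wei’s results below show, a drive reporting success here is not, by itself, proof that the data is gone.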

Wei et al’s 2011 data is shown in Figure 2. Clearly, this data is now five years old, and the reader needs to keep that in mind. The disturbing finding at the time was that of the 12 drives tested, seven didn’t implement the Enhanced SE function; only one self-encrypted the data (which is a good thing); and three drives executed SE yet data actually remained on the drive. Drive B even reported a successful SE, but Wei found that “all the data remained intact” (Wei’s emphasis, not mine).

Figure 2: Data reported by Wei et al in “Reliably Erasing Data From Flash-Based Solid State Drives,” 2011. This refereed paper is often cited when discussing the challenges of sanitizing flash memory and flash-based SSDs.


Recommendations for Sanitizing

The results shown in Figure 2 prompt the following recommendation from the researchers:

The wide variance among the drives leads us to conclude that each implementation of the security commands must be individually tested before it can be trusted to properly sanitize the drive.

Since these results were published in 2011, the industry has seen many changes as flash memory (and SSD) densities have increased while prices have fallen. Today, drive manufacturers are cognizant of the need for SE, and companies like Kingston even reference Wei et al’s paper, clearly telling their users that the SE commands implemented by the drives must be verified. Kingston states: “Kingston SSDNow drives support the ATA Security Command for proper data sanitization and destruction.”

My opinion, after reading piles of data on this topic, is exactly what Wei recommended in 2011: users with sensitive data who wish to sanitize a drive can rely on the ATA Secure Erase command—as long as it’s correctly implemented. To me, that means users should test their chosen drive(s) and do their own verification that the data is actually gone. When you find a vendor that meets your needs, put their drive under a Source Control Drawing and revision control, and stick with your Approved Vendor List. Buying a different drive might leave your data open to anyone’s interpretation.
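
What might that verification look like? One rough, do-it-yourself approach—sketched below in Python under several assumptions (a Linux host, root access, a dedicated test drive at a placeholder path, and a marker string of my own invention)—is to fill the drive with a recognizable pattern, run the drive’s secure erase, and then scan the raw device for survivors:

```python
# Hedged sketch of a do-it-yourself post-erase check: plant a marker across
# the raw device, run the drive's secure-erase procedure, then scan for any
# surviving copies. This only inspects user-addressable LBAs; Wei et al.
# verified at the raw NAND level, which requires chip-level access.
DEV = "/dev/sdX"                      # placeholder device (destructive!)
MARKER = b"ZEROIZE-TEST-0123456789"   # unlikely to occur by accident
CHUNK = 1 << 20                       # 1 MiB per I/O

def plant_marker(dev):
    block = (MARKER * (CHUNK // len(MARKER) + 1))[:CHUNK]
    with open(dev, "wb", buffering=0) as f:
        try:
            while True:
                f.write(block)        # fill the device end to end
        except OSError:               # no space left: end of device
            pass

def scan_for_marker(dev):
    hits, tail = 0, b""
    with open(dev, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return hits
            hits += chunk.count(MARKER)
            # also catch a marker straddling the previous chunk boundary
            hits += (tail + chunk[: len(MARKER) - 1]).count(MARKER)
            tail = chunk[-(len(MARKER) - 1):]

# Usage: plant_marker(DEV); run the ATA Secure Erase; then
#   print(scan_for_marker(DEV), "marker copies still readable")
```

A clean scan only proves the user-addressable area came back empty; a determined adversary, like Wei’s team, will go after the raw flash, so chip-level verification (or a vendor’s documented, tested SE implementation) remains the gold standard.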

Technology, Philosophy, and Kitty Litter: An Interview with VITA’s Ray Alderman

By: Chris A. Ciufo, Editor, Embedded Systems Engineering

Chairman of the Board, Ray Alderman, presents a unique view of how embedded companies compete, thrive and die in the COTS market.

One never knows what Ray Alderman is going to say, only that it’s going to be interesting.  As Chairman of the Board of VITA (and former Executive Director), Ray is a colorful character. We caught up with him to discuss a recent white paper he wrote entitled: “RAW – How This Embedded Board and Systems Business Works.” We posed a series of questions to Ray about his musings; edited excerpts follow.

Chris “C2” Ciufo: Ray, you reference the Boston Consulting Group matrix that places companies in four quadrants, arguing that most of the companies in our embedded COTS industry are Low Volume (LV)/High Margin (HM) “Niche” players. The place not to be is the Low Volume/Low Margin (LV/LM) “Graveyard”—right where technologies like ISA, S-100, Multibus and PCI Gen 2 are. But…PCI Express?

Ray Alderman: I was careful to say “PCI Express Gen 2.” That’s because Gen 3 is on our doorstep, and then there will be Gen 4, and so on. Gen 2 will be EOL [end of life] before too long. The niche players in our market—all embedded boards, not just VME/VPX—rarely take leadership in mainstream technology. That position is reserved for the four companies that control 75% of the commercial embedded market segment, or $1.5 billion. They are ADLINK, Advantech, congatec, and Kontron: these guys get the inside track with technology innovators like Intel and Nvidia; they’ll have PCIe Gen 4 product ready to ship before the niche players even have the advanced specs. Everyone else has to find other ways to compete.

C2: You said that “in the history of this industry, no company has ever reached $1 billion in sales” because as the volumes go up, customers shift to contract manufacturers to lower their prices. Only three companies ever came close to the HV/LM quadrant. Who were they?

Ray: Advantech, Kontron and Motorola Computer Group (MCG). MCG, you’ll recall, was amalgamated with Force when sold by Solectron, and then morphed into Emerson Computer Group. MCG damn near ruled the VME business back then, but as my model points out—it was unsustainable. Advantech and Kontron are still around, although Kontron is going through some—ahem!—realignment right now. My model and predictions still hold true.

C2: What’s causing this growth-to-bust cycle in the embedded market? Not all markets experience this kind of bell curve: many keep rising beyond our event horizon.

Ray: Since about 1989, the companies that had to sell out or went out of business made one of two basic mistakes: (1) they entered into a commodity market and could not drive their costs down fast enough, or (2) they entered a niche market with a commodity strategy and the volumes never materialized.

I’ve been saying this for a while—it’s practically “Alderman’s Law”—but our military embedded board and system merchant market (all form factors) is about $1.2 billion. The cat litter market in the U.S. is about $1.8 billion, and their product is infinitely less complicated.

C2: Wait—are you really comparing kitty litter to embedded technology?

Ray: By contrast. Cat litter margins are low, volumes are high and they use a complex distribution system to get the litter to cats. Our margins are high, our volumes are low, and we deal direct with the users. The top three companies in the military segment—Abaco [formerly GE Intelligent Platforms], Curtiss-Wright Defense Solutions and Mercury—total up to about $750 million. They’re around $200 million each. They add intellectual value and enjoy high GPM [gross profit margin].

On the other hand, the commercial embedded board market for telecom, industrial, commercial and transportation totals to about $2.0 billion. Using kitty logic, the dry cat food market in the U.S. is about $3.8 billion. Their margins are low, volumes are high, and they use a complex distribution system. The players in the commercial board market have low margins, low volumes (compared to other segments), and sell directly to end users. It’s a terrible place to be. Kitty litter or cat food?

C2: What’s your advice?

Ray: I’m advocating for the military market, where margins are higher. About 61% of the military embedded board/system market is controlled by the three vendors, $750 million. The remaining $450 million (39%) is shared by many small niche vendors: nice, profitable niches. Several smaller companies do $30-50 million in this segment.  In contrast, only four companies control 75% of the commercial embedded boards market, or roughly $1.5 billion. That leaves a mere $500 million (25%) for all of the other smaller companies. Thus there are not many fairly large or profitable niches for these smaller guys—and not many of them do more than $10-15 million. Kitty litter, anyone?

C2: Can you offer some specific advice for board vendors?

Ray: There are only three values you can add to make money in these markets: manufacturing value, service value, and intellectual value. Adding intellectual value is where you add high-level technical skills that other companies do not have. Examples: high speed A-to-D boards where companies like Mercury and Pentek live. You can also add DSPs with unique IP inside. Again, Mercury and Pentek come to mind. In fact, Mercury (then Mercury Computer Systems) proved this model nicely when they invented the RACEway inter-board link and created ASICs to implement it. If you want to raise your GPM, this is how you do it.

In fact, Mercury is still doing it. They bought Echotek some years ago for their I/O boards and just recently bought three divisions of Microsemi. With this latest acquisition, they gain secure storage memories, crypto IP, and a bunch of RF capabilities to add to their existing RF portfolio. Today, RF technology is “magical” and Mercury will be able to charge accordingly for it to maximize their GPM.  Most of the embedded board military suppliers add their value to the market through intellectual value. It makes the most sense.

C2: Is the recipe for success merely targeting niche markets and adding intellectual value?

Ray: I’ll let you in on a little secret. The margin on boards is much higher than the margin on systems. It’s ironic, because every board guy seems to want to get into the systems business, and there have been lots of M&A [mergers and acquisitions] over the past several years. If you’re going to do systems, you’ve got to raise the price, especially if you’re selling air-cooled [convection] systems. Conduction-cooled systems command a higher price, but they’re harder to design.

You also need to choose the niche carefully, but that goes without saying. If you can add intellectual value to your niche—such as high performance GPGPU processing—you can command higher prices, whether at the board- or systems level.

There are only three ways to be successful in the embedded boards and systems business. Be first, be smarter, or cheat. Let me explain.

Being first is usually relegated to the big guys, like Abaco, Curtiss-Wright, or Mercury. They get access to the latest semiconductor technology, which is a fundamental driver in all of our markets. Examples here would be in-advance knowledge of Intel’s Kaby Lake follow-on to the Skylake Core i7 processor, or Nvidia’s plans for their next GPU. The smaller board vendors won’t get access, so they usually can’t be first.

One other thing, the big guys can also adapt a market to them. That is, they have enough influence that they can actually move an entire market. The smaller guys just have to find other ways.

But they can be smarter. Force Computers couldn’t (at the time) beat Motorola’s Computer Group because Motorola was inventing the 68xxx processors back then. So Force switched to the SPARC processor and built a successful business around it. In effect, Force adapted to a market that was hungry for processing power—it didn’t have to be 68020 or 68040 processing power. [Editor’s note: in fact, the 68040 wasn’t successful because Motorola itself introduced the PowerPC processor, co-developed with IBM, to the market. The market moved away from the 68xxx CISC processors to the PPC60x RISC processors; the rest is “history.”]

C2: And lastly, how should companies “cheat” to win?

Ray: It’s hard to cheat in the open market, against big entrenched players. The best way to cheat is to fragment an existing market. Sun Tzu called this the “Divisional” strategy. A company can create a niche, for example by creating an open standard for its own version of a board or system architecture. Creating a niche is like being smarter, but it is marketing-based instead of engineering-based.

At VITA/VSO, the policies and procedures allow any company, along with two other sponsors, to write a new standard without interference. There are countless examples of this within VITA, and many of these “fragmented niches” have become successful standards that we use today, including FMC, PMC, and XMC [mezzanine cards]. Older standards like GreenSpring [mezzanine modules] were successful but are now mostly obsolete. There are other new standards, such as the three for rugged small form factors [VITA 73, 74, 75]. And the various OpenVPX profiles are other examples, such as the new “Space VPX” and “Space VPX Lite.”

C2: Any last thoughts?

Ray: As Albert Einstein once said, “We cannot solve problems by using the same kind of thinking we used when we created them.” My point: look to new architectures beyond von Neumann’s architecture that the semiconductor guys keep forcing on us. Consider fiber interconnects as a way to get off the copper-trace technology curve. Create a niche—“cheat” if you have to. Just don’t end up following a kitty litter business strategy, else you’ll be taken out with the trash.

How Does One “Zeroize” Flash Devices?

By Chris A. Ciufo, Editor Embedded Systems Engineering

Editor’s Note: This is Part 1 of a two-part article on the topic of securely erasing data in flash devices such as memories and SSDs. In Part 2, I examine the built-in flash secure erase feature intended to eradicate sensitive data and see if it meets DoD and NIST specifications.

I was recently asked the question of how to go about “zeroizing” flash memory and SSDs. I had incorrectly assumed there was a single government specification that clearly spelled out the procedure(s). Here’s what several hours of research revealed:

DoD has no current spec that I could find besides DoD 5220.22-M, the “National Industrial Security Program Operating Manual.”[1] This 2006 document, prefaced by the Under Secretary of Defense, cancels a previous 1995 version and discusses some pretty specific procedures for handling classified information. After all, the only reason to sanitize or zeroize flash memory is to eradicate classified information like data, crypto keys, or operating programs (software). The document makes reference to media—including removable media (presumably discs, CDs and USB drives at that time)—and the need to sanitize classified data. However, I was unable to identify a procedure for sanitizing the media.

There is, however, a reference to NIST document 800-88, “Guidelines for Media Sanitization,” published in DRAFT form in 2012. It is a long document that goes into extensive detail on types of media and the human chain of command for handling classified data; Appendix A provides lengthy tables on how to sanitize different media. Table A-8 deals with flash memory and lists the following steps (Figure 1):

• Clear: 1. Overwrite the data “using organizationally approved and validated overwriting technologies/methods/tools,” with at least one pass that writes zeros into all locations (a minimal sketch follows Figure 1 below). 2. Leverage the “non-enhanced” ATA Secure Erase feature built into the device, if supported.

• Purge: 1. Use the ATA sanitize command via a) block erase or b) Cryptographic Erase (aka “sanitize crypto scramble”); one can optionally apply the block erase command after the sanitize command. 2. Apply the ATA Secure Erase command, though the built-in sanitize command (if available) is preferred. 3. Use “Cryptographic Erase through TCG Opal SSC or Enterprise SSC,” which relies on media (drives, including SSDs) that implement the FIPS 140-2 self-encrypting feature.

• Shred, Disintegrate, Pulverize, or Incinerate the device. This literally means mechanically destroying the media such that even if any 1’s and 0’s remain on the floating transistor gates, it’s not possible to reconstruct those bits into useful data.

Figure 1: Recommended ways to sanitize flash media per NIST 800-88 DRAFT Rev 1 (2012).
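
As promised above, here is what the Clear step’s single zero-fill pass plus a read-back check might look like. This is a hypothetical sketch with a placeholder device path, not an “organizationally approved and validated” tool, and it reaches only user-addressable blocks—precisely the limitation that makes the flash-aware Purge options necessary.

```python
# Hedged sketch of NIST 800-88 "Clear," step 1: one pass of zeros over all
# user-addressable blocks, then verification. Over-provisioned and remapped
# flash blocks are untouched--that is what Purge / Secure Erase is for.
DEV = "/dev/sdX"     # placeholder; run as root against an unmounted test drive
CHUNK = 1 << 20      # 1 MiB per I/O

def clear_with_zeros(dev):
    zeros = bytes(CHUNK)
    with open(dev, "wb", buffering=0) as f:
        try:
            while True:
                f.write(zeros)
        except OSError:            # end of device reached
            pass

def verify_zeros(dev):
    with open(dev, "rb", buffering=0) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return True        # read the whole device; everything was zero
            if any(chunk):         # any non-zero byte means the Clear failed
                return False

# clear_with_zeros(DEV)
# print("verified all zeros:", verify_zeros(DEV))
```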


Of note in the NIST document is a footnote stating that Clear and Purge must each be verified; Crypto Erase only needs verification if it is performed prior to a Clear or Purge. In all of these cases, every procedure except mechanical eradication relies on mechanisms built into the drive/media by the manufacturer. There is some question whether this is as secure as intended, and the NSA—America’s gold standard for all things crypto—takes a far stricter approach.
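
To illustrate that reliance on drive-resident mechanisms, here is roughly what invoking the Purge-style ATA sanitize options looks like on Linux. Recent hdparm releases expose the ATA SANITIZE feature set; the flag names below come from that support, but whether they work depends on both the hdparm build and the drive, and the device path is a placeholder—again, a sketch rather than an approved procedure.

```python
# Hedged sketch: trigger the drive's built-in ATA SANITIZE mechanisms (the
# NIST "Purge" options) via hdparm on Linux. Destructive; requires root, an
# unmounted drive, and an hdparm build with SANITIZE support.
import subprocess

DEV = "/dev/sdX"   # placeholder device

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Purge option (a): block erase, if the drive supports it.
run("hdparm", "--yes-i-know-what-i-am-doing", "--sanitize-block-erase", DEV)

# Purge option (b): cryptographic erase ("sanitize crypto scramble") on
# self-encrypting drives--uncomment to use instead of the block erase.
# run("hdparm", "--yes-i-know-what-i-am-doing", "--sanitize-crypto-scramble", DEV)

# Poll until the drive reports that the sanitize operation has completed.
run("hdparm", "--sanitize-status", DEV)
```

Even then, per the NIST footnote, the result still has to be verified.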

The NSA only allows strong encryption or mechanical shredding, as specified in the “NSA/CSS Storage Device Sanitization Manual.” This 2009 document is now a bit difficult to find, perhaps because the NSA is constantly revising its Information Assurance (IA) recommendations in response to changing cyberspace threats and information warfare. Visiting the NSA website on IA requires a DoD PKI certificate per TLS 1.2 and a “current DoD Root and Intermediate Certificate Authorities (CA) loaded” into a browser. Clearly the NSA follows its own recommendations.

The manual is interesting reading in that one has only two choices: cryptographically protect the data (and the keys), and hence not worry about sanitization; or render the media (drive) completely unrecognizable, with zero probability of any data remaining. By “unrecognizable,” think of an industrial shredder or an iron ore blast furnace. When it’s done, there’s nothing remaining.

Recent discussions with government users on this topic reminded me of the Hainan Island Incident in 2001, when a Chinese fighter jet attempting an intercept collided with a US Navy EP-3 SIGINT aircraft. The EP-3 was forced to make an emergency landing on Chinese-controlled Hainan, giving unauthorized parties access to classified US equipment, data, algorithms and crypto keys (Figure 2). It was a harrowing experience, sadly causing the death of the Chinese pilot and nearly costing the lives of the 24 Navy crew.

The crew had 26 minutes to destroy sensitive equipment and data while in the air using a fire axe, hot coffee and other methods, plus another 15 minutes on the ground, but it was widely reported to be only partially successful. While this sounds far-fetched, the topic of sanitizing data is so critical—yet so unresolved, as described above—that allegedly some current-generation equipment includes a visible “Red X” indicating exactly where an operator is to aim a bullet as a last ditch effort to mechanically sanitize equipment.

Figure 2: US Navy EP-3 SIGINT plane damaged in 2001 by collision with Chinese fighter jet. The crew did only a partial sanitization of data. (Image courtesy of Wikipedia.org and provided by Lockheed Martin Aeronautics.)


From Pulverize to Zeroize

There’s a lot of room between the DoD’s wish to have classified data and programs zeroized and the NSA’s recommendation to pulverize. The middle ground is the NIST spec listed above, which relies heavily on flash memory manufacturers’ built-in secure erase options. While there are COTS recommendations for secure erase, they are driven not by military requirements but by the need to protect laptop data, satisfy Sarbanes-Oxley (corporate) legislation, guard health records per HIPAA, and secure financial data.

In Part 2 of this article, I’ll examine some of the COTS specifications built into ATA standards (such as Secure Erase), recommendations presented at Flash Memory Summit meetings, and raise the question of just how much trust one can place in these specifications that are essentially self-certified by the flash memory manufacturers.


[1] Previously, DoD relied on NISPOM 8-306; NSA had NSA 130-2 and NSA 9-12; Air Force had AFSSI-5020; Army had AR 380-19; and Navy had NAVSO P-5239-26. These all appear to be out of date and possibly superseded by the latest 5220.22-M. As a civilian, it’s unclear to me—perhaps a reader can shed some light?

Design Resources: USB 3.1 and Type-C

By: Chris A. Ciufo, Editor, Embedded Systems Engineering

An up-to-date quick reference list for engineers designing with Type-C.

USB 3.1 and its new Type-C connector are likely in your near-term design future. USB 3.1 and the Type-C connector run at up to 10 Gbps, and Type-C is the USB-IF’s “does everything” connector that can be inserted either way (and is never upside down). The Type-C connector also delivers USB 3.1 speeds plus other gigabit protocols simultaneously, including DisplayPort, HDMI, Thunderbolt, PCI Express and more.

Also new or updated are the Battery Charging (BC) and Power Delivery (PD) specifications that provide up to 100W of charge capability in an effort to eliminate the need for a drawer full of incompatible wall warts.

If you’ve got USB 3.1 “SuperSpeed+” or the Type-C connector in your future, here’s a recent list of design resources, articles and websites that can help get you up to speed.

Start Here: The USB Implementers Forum (USB-IF) governs all of these specs, with lots of input from industry partners like Intel and Microsoft. USB 3.1 (it’s actually Gen 2), Type-C, and PD information is available via the USB-IF, and it’s the best place to go for the actual details (note the hotlinks). Even if you don’t read them now, you know you’re going to need to read them eventually.

“Developer Days”: The USB-IF presented this two-day seminar in Taipei in November 2015. I’ve recently discovered the treasure trove of presentations located here (Figure 1). The “USB Type-C Specification Overview” is the most comprehensive I’ve seen lately.

Figure 1: USB-IF held a “Developer Days” forum in Taipei November 2015. These PPT’s are a great place to start your USB 3.1/Type-C education. (Image courtesy: USB-IF.org.)


What is Type-C? Another decent 1,000-foot view is my first article on Type-C: “Top 3 Essential Technologies for Ultra-mobile, Portable Embedded Systems.” Although the article covers other technologies, it compares Type-C against the other USB connectors and introduces designers to the USB-IF’s Battery Charging (BC) and Power Delivery (PD) specifications.

What is USB? To go further back to basics, “3 Things You Need to Know about USB Switches” starts at USB 1.1 and brings designers up to USB 3.0 SuperSpeed (5 Gbps). While the article is about switches, it also reminds readers that at USB 3.0 (and 3.1) speeds, signal integrity can’t be ignored.

USB Plus What Else? The article “USB Type-C is Coming…” overlays the aforementioned information with Type-C’s sideband capabilities that can transmit HDMI, DVI, Thunderbolt and more. Here, the emphasis is on pins, lines, and signal integrity considerations.

More Power, Scotty! Type-C’s 100W Power Delivery sources energy in either direction, depending upon the enumeration sequence between host and target. Components are needed to handle this logic, and the best source of info is from the IC and IP companies. A recent Q&A we did with IP provider Synopsys “Power Where It’s Needed…” goes behind the scenes a bit, while TI’s E2E Community has a running commentary on all things PD. The latter is a must-visit stop for embedded designers.

Finally, active cables are the future as Type-C interfaces to all manner of legacy interfaces (including USB 2.0/3.0). At last year’s IDF 2015, Cypress showed off dongles that converted between specs. Since then, the company has taken the lead in this emerging area and they’re the first place to go to learn about conversions and dongles (Figure 2).

Figure 2: In the Cypress booth at IDF 2015, the company and its partners showed off active cables and dongles. Here, Type-C (white) converts to Ethernet, HDMI, VGA, and one more I don’t recognize. (Photo by Chris A. Ciufo, 2015.)


Evolving Future: Although USB 3.1 and the Type-C connector are solid and not changing much, IC companies are introducing more highly integrated solutions for the BC, PD and USB 3.1 specifications plus sideband logic. For example, Intel’s Thunderbolt 3 uses Type-C and runs up to 40 Gbps, suggesting that Type-C has substantial headroom and more change is coming. My point: expect to keep your USB 3.1 and Type-C education up-to-date.

Intel Changes Course–And What a Change!

By Chris A. Ciufo, Editor, Embedded Intel Solutions

5 bullets explain Intel’s recent drastic course correction.

Intel CEO Brian Krzanich (Photo by author, IDF 2015.)


I recently opined on the amazing technology gifts Intel has given the embedded industry as the company approaches its 50th anniversary. Yet a few weeks later, the company released downward financials and announced layoffs, restructurings, executive changes and new strategies. Here are five key points from the recent news-storm of (mostly) negative coverage.

1. Layoffs.

Within days of the poor financial news, Intel CEO Brian Krzanich (“BK”) announced that 12,000 loyal employees would have to go. As the event unfolded over a few days, the pain was felt throughout Intel: from the Oregon facility where its IoT Intelligent Gateway strategy resides, to its design facilities in Israel and Ireland, to older fabs in places like New Mexico. Friends of mine at Intel have either been let go or are afraid for their jobs. This is the part about tech—and it’s not limited to Intel, mind you—that I hate the most. Sometimes it feels like a sweatshop where workers are treated poorly. (Check out the recent story concerning BiTMICRO Networks, which really did treat its workers poorly.)

2. Atom family: on its way out. 

This story broke late on the Friday night after the financial news—it was almost as if the company hadn’t planned on talking about it so quickly. But the bottom line is that the Atom never achieved all the goals Intel set out for it: lower price, lower power and a spot in handheld. Of course, much is written about Intel’s failure to wrest more than a token slice out of ARM’s hegemony in mobile. (BTW: that term “hegemony” used to be applied to Intel’s dominance in PCs. Sigh.) Details are still scant, but the current Atom Bay Trail architecture works very nicely, and I love my Atom-based Win8.1 Asus 2:1 with it. But the next Atom iteration (Apollo Lake) looks like the end of the line. Versions of Atom may live on under other names like Celeron and Pentium (though some of these may also be Haswell or Skylake versions).

3. New pillars announced.

Intel used to use the term “pillars” for its technology areas, and BK has gone to great lengths to list the new ones as: Data Center (aka Xeon); Memory (aka flash SSDs and Optane, the 3D XPoint Intel/Micron joint venture); FPGAs (aka Altera, eventually applied to Xeon co-accelerators); IoT (aka what Intel used to call embedded); and 5G (a modem technology the company doesn’t really have yet). Mash-ups of these pillars include some of the use cases Intel is showing off today, such as wearables, medical, drones (apparently a personal favorite of BK), the RealSense camera, and smart automobiles including self-driving cars. (Disclosure: I contracted to Intel in 2013 pertaining to the automotive market.)


Intel’s new pillars, according to CEO Brian Krzanich. 5G modems are included in “Connectivity.” Not shown is “Moore’s Law,” which Intel must continue to push to be competitive.

4. Tick-tock goodbye.

For many years, Intel has set the benchmark for process technology and made damn sure Moore’s Law was followed. The company’s cadence of new architecture (Tock) followed by process shrink (Tick) predictably streamed products that found their way into PCs, laptops, and the data center (now “cloud” and soon “fog”). But as Intel approached 22nm, it got harder and harder to keep up the pace as CMOS channel dimensions approached Angstroms (inter-atomic distances). The company has now officially retired Tick-Tock in favor of a three-step cadence of Process, Architecture, and Optimization. This is in fact where the company is today as the Core series evolved from 4th-gen (Haswell) to 5th-gen (Broadwell—a sort-of interim step) to the recent 6th-gen (Skylake). Skylake is officially a “Tock,” but if you work backwards, it’s kind of a fine-tuned process improvement with new features such as really good graphics, although AnandTech and others lauded Broadwell’s graphics. The next product—Kaby Lake (just “leaked” last week, go figure)—looks to be another process tweak. Now-public specs point to even better graphics, if the data can be believed.

Intel is arguably the industry’s largest software developer, and second only to Google when it comes to Android. (Photo by author, IDF 2015.)


5. Embedded, MCUs, and Value-Add.

This last bullet is my prediction of how Intel is going to climb back out of the rut. Over the years the company mimicked AMD and focused almost singularly on selling x86 CPUs and variants (though it worked tirelessly on software and interfaces like PCIe, WiDi, Android, USB Type-C and much more). It jettisoned value-add MCUs like the then-popular 80196 16-bitter with A/D and the 8751 EPROM-based MCU—conceding all of these products to companies like Renesas (Hitachi), Microchip (PIC series), and Freescale (ARM- and Power-based MCUs, originally for automotive). Yet Intel can combine scads of its technology—including modems, WiFi (think: Centrino), PCIe, and USB—into intelligent peripherals for IoT end nodes. Moreover, the company’s software arsenal even beats IBM’s (I’ll wager), and Intel can apply the x86 code base and tool set to dozens of new products. Or, they could just buy Microchip or Renesas or Cypress.

It pains me to see Intel lay off people, retrench, and appear to fumble around. I actually do think it is shot-gunning things just a bit right now, and officially giving up on developing low-power products for smartphones. Yet they’ll need low power for IoT nodes, too, and I don’t know that Quark and Curie are going to cut it. Still: I have faith. BK is hell-fire-brimstone motivated, and the company is anything but stupid. Time to pick a few paths and stay the course.

Today’s Tech News…from the News: Slow Growth Forecast

We stitch together a technology forecast based upon several news snippets.

It’s a conscious decision for me to pull my head out of the technology headlines of RSS feeds, embedded newsletters, and various analyst reports. In fact, sometimes I read the broader media just to follow politics (Yikes! Why bother.), local news (it’s always grim), and financial minutiae. For the latter, I turn to FORTUNE Magazine—for which I actually pay real money.

It turns out that U.S. productivity is slowing, according to the article “We Were Promised a 20-Hour Workweek.” Since 2010 our productivity—output per labor hour—is up by only 0.5 percent per year, per the U.S. Bureau of Labor Statistics. This is bad, especially since it averaged 1.6% from 1974–1994 and 2.8% from 1995–2004. Productivity rises from either fewer labor hours (the denominator) or higher output (the numerator). Technology can affect both numbers.

The Internet of Things: Hope, or Hype? (Courtesy: Wiki Commons; Author: Wilgengebroed.)


Demand Side

Higher productivity has historically been driven by technology, either by improving the work itself or by creating new industries that created new work. Think steam engine, railroads, the industrial revolution, electricity, automobiles, women in the workforce, PCs, the Internet and smartphones.

Check the stats on the last three: the Internet grows each year, but it’s building out slowly into lesser-known and often primitive societies with little money to spend. PCs and smartphones? Flat, baby. So on the demand side, we see that demand for technology is down. Thin demand means tech isn’t working to increase productivity all that much.

Supply Side

On the other side of the coin, we can also see that technology supply is down. It makes sense…no demand means no supply, unless the stuff is being stockpiled someplace in warehouses. According to a Gartner statement in April 2016, worldwide semiconductor revenue is on track to decline 0.6% in 2016. Worrisome, but it’s only April. And the news is worse: this would be the second annual decline in a row, following 2015’s 2.3% drop.

So accept that both technology supply and demand are down, and this affects productivity.

Is the root of this a complete failure on the part of tech companies to come up with some “next big thing”?  I believe that indeed is the case. We’ve gone from PCs to smartphones to tablets…and now we’re onward to incremental improvements in each of these. Ultrabook/2:1s/convertibles; larger-screen “phablet” smartphones; thinner PCs and Macs with better processors; and in the case of Windows, touchscreen machines (come on, Apple…give us a touchscreen or an external mouse on an iPad Pro!).

My point? What’s the next great technology product or disruptive thingy waiting in the wings that can increase productivity output?

Cue the Music: IoT to the Rescue?

Experts, journalists and pundits (like me) point to the 20B–50B Internet of Things (IoT) doodads that are “out there.” We have smart homes on the horizon, and I like (not love) my Nest thermostat. Self-driving, V2V/V2I (vehicle-to-vehicle/infrastructure) cars might foment something disruptive by saving lives (more workers, less insurance cost) or reducing commute time so workers can be at their desks more. That would increase output (the productivity numerator) and demand for technology. In fact, the IoT might revolutionize lots of things in our world.

Steve Case, former CEO of AOL and clearly a Guru when it comes to technology revolutions, believes the third wave of the Internet will soon be upon us. In a FORTUNE article entitled “Steve Case Wants Tech to Love the Government” he forecasts the Internet as being “more seamlessly and pervasively in every aspect of our lives.” This, in fact, is the promise of the IoT.

Yet I’m not ready to buy it all just yet. As friend and colleague Ray Alderman, CEO of VITA, recently pointed out to me, a lot of the innovation in technology lately is in software and apps. These (usually) cost little to build but often don’t create major disruptive market changes. Will the next version of Word change your life? How about that Evernote update, or Adobe moving to a cloud-based model? Not even the radical change in CRM wrought by SalesForce.com caused all the salespeople of the free world to sing praise and meet their quotas.

So after a lunchtime reading roundup of the aforementioned stories and headlines, my conclusion is this: we’re slowing down, folks. I’m afraid corporate profits, the stock market, wages, and unemployment are all watching carefully.

Or…we could wait for tomorrow’s news. There’s bound to be a different conclusion to be reached.