Does Secure Erase Actually Work?

Chris A. Ciufo, Editor, Embedded Systems Engineering

In this Part 2 of 2, I examine the subject of using the flash manufacturer’s secure erase feature—since so many DoD documents recommend it.

In Part 1 of this blog (“How Does One ‘Zeroize’ Flash Devices?”), I set about finding DoD recommendations for “zeroizing” (or “sanitizing”) sensitive data from flash memory, including flash-based solid state disks (SSDs). Many of the government’s recommendations rely on the flash manufacturer’s Secure Erase command, which is allegedly based upon the ATA specification of the same name. Yet published research calls into question either how well this command works or how well manufacturers are implementing it in their devices. Either way, if the DoD allows a “self-policing” policy to protect sensitive data, I have concerns that the data isn’t safely locked up.

Note: unless otherwise specified by context, I use the terms “flash,” “flash memory,” and “solid state disks” or “SSDs” interchangeably, since the sanitization issues are similar. SSDs are made up of flash memory plus control, wear-leveling and sometimes encryption logic.

Flash in Freefall

In 2010 MLC flash memory hit its stride and became cheap and ubiquitous, making security an issue. According to data tracked by John C. McCallum, in 2004 the price per MB of flash ($/MB) was around $0.22; it dropped to $0.01 in 2006 and then hit ~$0.002 in 2009. That is: it dropped about 20x from 2004 to 2006 and another order of magnitude over the next two to three years. By 2010, flash had moved from expensive boot memory and cheesy 128MB freebie USB sticks to a credible high-density medium that would challenge mechanical (rotating media) HDDs.

Computer-ready SSDs arrived on the scene around this time. They were crazy fast and moderately dense, but way more expensive than hard disks of the same capacity. The speed made them compelling, and it became obvious that important data would end up on SSDs, so securing that data would eventually matter. As well, flash stores data differently than magnetic drives and requires a built-in wear-leveling algorithm to assure even “wear out” across internal memory blocks. Taken together, these issues catalyzed the industry to make recommendations for securely erasing devices to assure data was really gone when a file was deleted.

Industry Recommendations

Let’s start with the industry recommendations presented at the Flash Memory Summit in 2010, about the time flash was gaining serious traction. As presented by Jack Winters, CTO of Foremay, numerous industries—including defense—needed a way to securely erase sensitive data stored in SSDs and flash memories. Simply deleting files or reformatting an SSD is not acceptable because the data remains intact. The only way to successfully erase is to “overwrite all user data in allocated blocks, file tables and…in reallocated defective blocks,” said Mr. Winters at the time. Figure 1 summarizes the three types of ATA Secure Erase methods.

Figure 1: Secure Erase (SE) Method Summary, each offering pros and cons. (Courtesy: Flash Memory Summit, 2010; presented by Jack Winters of rugged SSD supplier Foremay.)

Type I software-based SE requires a user’s input via keyboard and utilizes a combination of the SE command processor, flash bus controller, (ATA) host interface and the device’s sanitize command. The device’s bad block table is erased, rendering the device (or the entire SSD built from those flash components) useless for reuse. Type II is a hybrid of software and hardware kicked off by an external line such as a GPIO, but logic erases the device(s) so the flash can be reused once the drive is reformatted. For defense customers, it’s unclear to me whether Type I or Type II is better—the point is to sanitize the data. Reusing the drive, no matter how expensive the drive, is of secondary concern.

Finally, Mr. Winters points out that Type III SE also kicks off via external GPIO but involves a high-voltage generator along with the controller to destroy the NAND flash transistors within seconds. The drive is not usable—ever—after a “purge”; it’s completely ruined. Note that this kind of erasure isn’t mentioned in the NSA’s “mechanical pulverization” sanitization procedures, and it’s unclear if Type III would meet the NSA’s guidelines for data removal.

These recommended SE procedures for flash made me wonder whether the techniques applied to rotating HDDs would also work on SSDs, or whether some users simply assume they are effective at securely sanitizing sensitive data stored on SSDs. After all, if the DoD/NSA recommendations are ambiguous…might users be misapplying them?

Refereed Research: Reliably Erasing SSDs?

An oft-cited refereed paper on the subject of SE appeared in 2011: “Reliably Erasing Data From Flash-Based Solid State Drives,” written by Michael Wei et al. (Note: the 30-minute video of his paper can be found here.) Mr. Wei’s team at UCSD reached three key conclusions:

  • Built-in SE commands are effective…but manufacturers sometimes implement them incorrectly (my emphasis).
  • Overwriting twice is usually, but not always, sufficient.
  • None of the existing HDD techniques for individual file sanitization are effective on SSDs.

This last point is important: SSDs store data differently than HDDs and therefore require flash-specific SE procedures, like the ones described above. According to Wei, “the ATA and SCSI command sets [for HDDs] include ‘secure erase’ commands that should sanitize an entire [HDD] disk.” But they don’t work on SSDs. SSDs map each logical block address to a raw flash location through a sort of look-up table called the Flash Translation Layer (FTL). This is done for a variety of reasons, from improving speed and wear-out endurance to “hiding the flash memory’s idiosyncratic interface,” says Wei.
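To make that concrete, here’s a toy sketch (mine, not Wei’s) of an FTL in a few lines of Python. Real FTLs manage pages, erase blocks, wear counters and garbage collection in firmware, but even this cartoon shows why an “overwrite” of a logical block can leave the old bytes sitting in a stale physical page:

```python
# Toy illustration of a Flash Translation Layer (FTL) remapping on write.
# Purely hypothetical structures -- real FTLs track pages, erase blocks,
# wear counters, and garbage collection inside the drive's firmware.

class ToyFTL:
    def __init__(self, num_pages):
        self.flash = [None] * num_pages   # physical flash pages
        self.l2p = {}                     # logical block -> physical page map
        self.next_free = 0

    def write(self, lba, data):
        """Writes land on a fresh page; the old page merely becomes stale."""
        page = self.next_free
        self.next_free += 1
        self.flash[page] = data
        self.l2p[lba] = page              # remap; old page still holds old data

    def read(self, lba):
        return self.flash[self.l2p[lba]]

ftl = ToyFTL(num_pages=8)
ftl.write(7, b"SECRET KEY")
ftl.write(7, b"0000000000")               # "overwrite" of logical block 7
print(ftl.read(7))                        # b'0000000000' -- looks erased
print(ftl.flash[0])                       # b'SECRET KEY' -- still in raw flash
```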

Wei and his colleagues investigated the ATA sanitization commands, software techniques to sanitize drives, and common software to securely erase individual files. Researchers dug deeply into the memories using forensic techniques—which is not unlike what a determined adversary might do when trying to extract classified data from a recovered DoD or military SSD.

Cutting to the chase: Wei discovered that trying to sanitize individual files “consistently fail[s]” to remove data. As well, software sanitizing techniques—not built into the drives—are usually successful at the whole-drive level, but the overwrite pattern may provide clues to the data or “may impact the effectiveness of overwriting.”

In fact, one of my colleagues from Mercury Computer’s Secure Memory Group (formerly Microsemi) told me that knowing the nature of the original data set provides some clues about that data merely by examining the overwrite patterns. It’s third-order deeply technical stuff, but it all points to the need for built-in flash SE circuitry and algorithms.

Another key point from Wei and his colleagues is that retrieving non- or poorly-sanitized data from SSDs is “relatively easy,” and can be done using tools and disassembly costing under $1,000 (in 2011). Comparable tools to recover erased files from rotating-media HDDs cost over $250,000. This points to the need for proper SE on SSDs.

Doing It Wrong…and Write

For SSDs, SE is based on the ATA Security “Erase Unit” (ATA-3) command originally written for HDDs in 1995, which “erases all user-accessible areas on the drive” by writing 1’s or 0’s into those locations. There is also an Enhanced Erase Unit command that allows the flash vendor to write whatever pattern best renders the device or drive “sanitized.” Neither of these commands specifically writes to non-user-accessible locations, even though flash devices (and hence SSDs) may contain 20 to 50 percent more cells for storage, speed, and write-endurance purposes. Finally, some drives contain a block erase command that sanitizes non-user-accessible locations as well.
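On a Linux box, the ATA Security Erase Unit sequence is usually issued through the hdparm utility. Below is a minimal sketch of that sequence in Python; the device path and temporary password are placeholders, the drive must not be in the “frozen” security state, and the command destroys everything on the drive—treat this as illustration, not a vetted procedure:

```python
# Minimal sketch of the ATA Secure Erase sequence via hdparm on Linux.
# DEVICE and PASSWORD are placeholders; this permanently destroys data and
# should only be run against a drive you intend to sanitize. Check that the
# drive reports "not frozen" and "supported" (hdparm -I) before erasing.
import subprocess

DEVICE = "/dev/sdX"    # placeholder: the target SSD
PASSWORD = "p"         # temporary ATA security password; cleared by the erase

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Inspect the drive's security state.
run(["hdparm", "-I", DEVICE])

# 2. Set a temporary user password to enable the security feature set.
run(["hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEVICE])

# 3. Issue the erase; use --security-erase-enhanced if the drive supports it.
run(["hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEVICE])
```

Given Wei’s findings below, a drive reporting success here is not, by itself, proof that the data is gone.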

Wei et al.’s 2011 data is shown in Figure 2. Clearly, this data is now five years old and the reader needs to keep that in mind. The disturbing trend at the time of the research: of the 12 drives tested, 7 didn’t implement the Enhanced SE function; only one self-encrypted the data (which is a good thing); and on 3 drives that executed SE, data actually remained on the drive. Drive B even reported a successful SE, yet Wei found that “all the data remained intact” (Wei’s emphasis, not mine).

Figure 2: Data reported by Wei et al in “Reliably Erasing Data From Flash-Based Solid State Drives”, 2011. This refereed white paper is often cited when discussing the challenges of sanitizing flash memory and flash-based SSDs.

Recommendations for Sanitizing

The results shown in Figure 2 prompt the following recommendation from the researchers:

The wide variance among the drives leads us to conclude that each implementation of the security commands must be individually tested before it can be trusted to properly sanitize the drive.

Since these results were published in 2011, the industry has seen many changes as flash memory (and SSD) densities have increased while prices have fallen. Today, drive manufacturers are cognizant of the need for SE, and companies like Kingston even reference Wei et al.’s paper, clearly telling their users that the SE commands implemented by the drives must be verified. Kingston states: “Kingston SSDNow drives support the ATA Security Command for proper data sanitization and destruction.”

My opinion, after reading piles of data on this topic, is exactly what Wei recommended in 2011: users with sensitive data wishing to sanitize a drive can rely on the ATA Secure Erase command—as long as it’s correctly implemented. To me that means users should test their chosen drive(s) to do their own verification that data is actually gone. When you find a vendor that meets your needs, put their drive under Source Control Drawing and Revision Control and stick with your Approved Vendor List. Buying a different drive might leave your data open to anyone’s interpretation.
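What might that do-it-yourself verification look like? One crude approach (far weaker than the chip-off forensics Wei’s team used, and limited to the user-visible LBA space) is to fill the drive with a recognizable marker, run the Secure Erase, then scan the raw device for any surviving markers. A hypothetical sketch:

```python
# Hedged sketch of a sanity check around Secure Erase: fill the drive with a
# known marker, erase it, then scan the raw device for surviving markers.
# This only sees the user-visible LBA space -- it cannot inspect spare blocks
# the way chip-off forensics can -- so it is a screening test, not proof.

DEVICE = "/dev/sdX"                    # placeholder target drive
MARKER = b"ZEROIZE-TEST-0123456789"    # easy-to-spot pattern
CHUNK = 1 << 20                        # read/write 1 MiB at a time

def fill_with_marker(dev):
    pattern = MARKER * (CHUNK // len(MARKER))
    with open(dev, "wb") as f:
        try:
            while True:
                f.write(pattern)
        except OSError:
            pass                       # device full: end of LBA space reached

def count_survivors(dev):
    hits = 0
    with open(dev, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            # Markers straddling chunk boundaries are ignored for simplicity.
            hits += chunk.count(MARKER)
    return hits

# Usage: fill_with_marker(DEVICE); run the drive's Secure Erase command;
# then count_survivors(DEVICE) should return 0 if the erase actually worked.
```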

How Does One “Zeroize” Flash Devices?

By Chris A. Ciufo, Editor, Embedded Systems Engineering

Editor’s Note: This is Part 1 of a two-part article on the topic of securely erasing data in flash devices such as memories and SSDs. In Part 2, I examine the built-in flash secure erase feature intended to eradicate sensitive data and see if it meets DoD and NIST specifications.

I was recently asked the question of how to go about “zeroizing” flash memory and SSDs. I had incorrectly assumed there was a single government specification that clearly spelled out the procedure(s). Here’s what several hours of research revealed:

DoD has no current spec that I could find besides DoD 5220.22-M, “National Industrial Security Program.”[1] This 2006 document, prefaced by the Under Secretary of Defense, cancels a previous 1995 recommendation and discusses some pretty specific procedures for handling classified information. After all, the only reason to sanitize or zeroize flash memory is to eradicate classified information like data, crypto keys, or operating programs (software). The document makes reference to media—including removable media (presumably discs, CDs and USB drives at that time)—and the need to sanitize classified data. However, I was unable to identify a procedure for sanitizing the media.

There is, however, a reference to NIST document 800-88, “Guidelines for Media Sanitization,” published in DRAFT form in 2012. It’s a long document that goes into extensive detail on types of media and the human chain of command for handling classified data, and Appendix A provides lengthy tables on how to sanitize different media. Table A-8 deals with flash memory and lists the following steps (Figure 1):

- Clear: 1. Overwrite the data “using organizationally approved and validated overwriting technologies/methods/tools” with at least one pass writing zeros into all locations. 2. Leverage the “non-enhanced” ATA Secure Erase feature built into the device, if supported.

- Purge: 1. Use the ATA sanitize command via a) block erase and b) Cryptographic Erase (aka “sanitize crypto scramble”). One can optionally apply the block erase command after the sanitize command. 2. Apply the ATA Secure Erase command, though the built-in (if available) sanitize command is preferred. 3. Use “Cryptographic Erase through TCG Opal SSC or Enterprise SSC”—which relies on media (drives, including SSDs) that use the FIPS 140-2 self-encrypting feature. (A toy sketch of the crypto-erase principle follows this list.)

- Shred, Disintegrate, Pulverize, or Incinerate the device. This literally means mechanically destroying the media such that, even if any 1’s and 0’s remain on the floating transistor gates, it’s not possible to reconstruct those bits into useful data.
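The Cryptographic Erase option in the Purge row deserves a word of explanation: on a self-encrypting drive, user data only ever exists on the flash as ciphertext, so sanitization reduces to destroying the media encryption key. The toy sketch below illustrates only the principle, using the third-party Python cryptography package; a real self-encrypting drive does this in hardware and firmware, not in application code:

```python
# Toy illustration of the Cryptographic Erase principle behind NIST 800-88's
# Purge option: if data is only ever stored as ciphertext, destroying the key
# sanitizes the media. Requires the third-party 'cryptography' package and is
# purely conceptual -- it is not how any particular drive implements the feature.
from cryptography.fernet import Fernet

media_key = Fernet.generate_key()          # stands in for the drive's media key
stored_ciphertext = Fernet(media_key).encrypt(b"classified mission data")

# Only ciphertext ever touches the "flash".
print(stored_ciphertext[:16], b"...")

# "Crypto scramble": destroy/replace the media key. The ciphertext left behind
# can no longer be decrypted by anyone, including the drive itself.
media_key = Fernet.generate_key()          # old key is gone for good
try:
    Fernet(media_key).decrypt(stored_ciphertext)
except Exception as exc:
    print("decryption after key destruction fails:", type(exc).__name__)
```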

Figure 1: Recommended ways to sanitize flash media per NIST 800-88 DRAFT Rev 1 (2012).

Of note in the NIST document is a footnote stating that Clear and Purge must each be verified. Crypto Erase only needs verification if performed prior to a Clear or Purge. In all of these cases, every procedure except mechanical eradication relies on mechanisms built into the drive/media by the manufacturer. There is some question whether this is as secure as intended, and the NSA—America’s gold standard for all things crypto—has only one recommended procedure.

The NSA only allows strong encryption or mechanical shredding, as specified in the “NSA/CSS Storage Device Sanitization Manual.” This 2009 document is now a bit difficult to find, perhaps because the NSA is constantly revising its Information Assurance (IA) recommendations in response to the changing cyberspace threats of information warfare. Visiting the NSA website on IA requires a DoD PKI certificate per TLS 1.2 and “current DoD Root and Intermediate Certificate Authorities (CA)” loaded into the browser. Clearly the NSA follows its own recommendations.

The manual is interesting reading in that one has only two choices: cryptographically protect the data (and the keys) and hence not worry about sanitization, or render the media (drive) completely unrecognizable with zero probability of any data remaining. By “unrecognizable,” think of an industrial shredder or an iron ore blast furnace. When it’s done, there’s nothing remaining.

Recent discussions with government users on this topic reminded me of the Hainan Island Incident in 2001, when a Chinese fighter jet attempting an intercept collided with a US Navy EP-3 SIGINT aircraft. The EP-3 was forced to make an emergency landing on China-controlled Hainan, giving the Chinese unauthorized access to classified US equipment, data, algorithms and crypto keys (Figure 2). It was a harrowing experience, sadly causing the death of the Chinese pilot and nearly costing the lives of the 24 Navy crew members.

The crew had 26 minutes in the air to destroy sensitive equipment and data using a fire axe, hot coffee and other methods, plus another 15 minutes on the ground, but the effort was widely reported to be only partially successful. While this sounds far-fetched, the topic of sanitizing data is so critical—yet so unresolved, as described above—that allegedly some current-generation equipment includes a visible “Red X” indicating exactly where an operator is to aim a bullet as a last-ditch effort to mechanically sanitize equipment.

Figure 2: US Navy EP-3 SIGINT plane damaged in 2001 by collision with Chinese fighter jet. The crew did only a partial sanitization of data. (Image courtesy of Wikipedia.org and provided by Lockheed Martin Aeronautics.)

From Pulverize to Zeroize

There’s a lot of room between the DoD’s wish to have classified data and programs zeroized and the NSA’s recommendation to pulverize. The middle ground is the NIST spec listed above, which relies heavily on flash memory manufacturers’ built-in secure erase options. While there are COTS recommendations for secure erase, they are driven not by military requirements but by the need to protect laptop data, satisfy Sarbanes-Oxley (corporate) legislation, guard health records per HIPAA, and secure financial data.

In Part 2 of this article, I’ll examine some of the COTS specifications built into ATA standards (such as Secure Erase), recommendations presented at Flash Memory Summit meetings, and raise the question of just how much trust one can place in these specifications that are essentially self-certified by the flash memory manufacturers.


[1] Previously, DoD relied on NISPOM 8-306; NSA had NSA 130-2 and NSA 9-12; Air Force had AFSSI-5020; Army had AR 380-19; and Navy had NAVSO P-5239-26. These all appear to be out of date and possibly superseded by the latest 5220.22-M. As a civilian, it’s unclear to me—perhaps a reader can shed some light?

AMD’s “Beefy” APUs Bulk Up Thin Clients for HP, Samsung

There are times when a tablet is too light, and a full desktop too much. The answer? A thin client PC powered by an AMD APU.

Note: this blog is sponsored by AMD.

A desire to remotely access my Mac and Windows machines from somewhere else got me thinking about thin client architectures. A thin “client” machine has sufficient processing for local storage and display—plus keyboard, mouse and other I/O—and is remotely connected to a more beefy “host” elsewhere. The host may be in the cloud or merely somewhere else on a LAN, sometimes intentionally inaccessible for security reasons.

Thin client architectures—or just “thin clients”—find utility in call centers, kiosks, hospitals, “smart” monitors and TVs, military command posts and other multi-user, virtualized installations. At times they’ve been characterized as low performance or limited in functionality, but that’s changing quickly.

They’re getting additional processing and graphics capability thanks to AMD’s G-Series and A-Series Accelerated Processing Units (APUs). By some analysts’ accounts, AMD is number one in thin clients, and the company keeps winning designs with its highly integrated x86-plus-Radeon-graphics SoCs: most recently with HP and Samsung.

HP’s t420 and mt245 Thin Clients

HP’s ENERGY STAR-certified t420 is a fanless thin client for call centers, desktop-as-a-service and remote kiosk environments (Figure 1). Intended to mount on the back of a monitor such as the company’s ProDisplays (like you see at the doctor’s office), the unit runs HP’s ThinPro 32 or Smart Zero Core 32 operating system, has either 802.11n Wi-Fi or Gigabit Ethernet, 8 GB of flash and 2 GB of DDR3L SDRAM.

Figure 1: HP’s t420 thin client is meant for call centers and kiosks, mounted to a smart LCD monitor. (Courtesy: HP.)

USB ports for keyboard and mouse supplement the t420’s dual-display capability (DVI-D and VGA)—made possible by AMD’s dual-core GX-209JA running at 1 GHz.

Says AMD’s Scott Aylor, corporate vice president and general manager, AMD Embedded Solutions: “The AMD Embedded G-Series SoC couples high performance compute and graphics capability in a highly integrated low power design. We are excited to see innovative solutions like the HP t420 leverage our unique technologies to serve a broad range of markets which require the security, reliability and low total cost of ownership offered by thin clients.”

The whole HP thin client consumes a mere 45W and, according to StorageReview.com, will retail for $239.

Along the lines of a lightweight mobile experience, HP has also chosen AMD for its mt245 Mobile Thin Client (Figure 2). The thin client “cloud computer” resembles a 14-inch (1366 x 768 resolution) laptop with up to 4 GB of SDRAM and a 16 GB SSD; the unit runs Windows Embedded Standard 7P 64 on AMD’s quad-core A6-6310 APU with Radeon R4 GPU. There are three USB ports, one VGA and one HDMI port, plus Ethernet and optional Wi-Fi.

Figure 2: HP’s mt245 is a thin client mobile machine, targeting healthcare, education, and more. (Courtesy: HP.)

Like the t420, the mt245 consumes a mere 45W and is intended for employee mobility but configured for a thin client environment. AMD’s director of thin client product management, Stephen Turnbull, says the mt245 targets “a whole range of markets, including education and healthcare.”

At the core of this machine, pun intended, is the Radeon GPU that provides heavy-lifting graphics performance. The mt245 can not only take advantage of virtualized cloud computing but also has the local moxie to run graphics-intensive applications like 3D rendering. Healthcare workers might, for example, examine ultrasound images. Factory technicians could pull up assembly drawings, then rotate them in CAD-like software applications.

Samsung Cloud Displays

An important part of Samsung’s displays business involves “smart” displays, monitors and televisions. Connected to the cloud or operating autonomously as a panel PC, many Samsung displays need local processing such as that provided by AMD’s APUs.

Samsung’s recently announced (June 17, 2015) 21.5-inch TC222W and 23.6-inch TC242W also use AMD G-Series devices in thin client architectures. The dual-core 2.2 GHz GX222 with Radeon HD6290 graphics powers both displays at 1920 x 1080 (Full HD), provides six USB ports and Ethernet, and runs Windows Embedded 7 from 4 GB of RAM and a 32 GB SSD.

Figure 3: Samsung’s Cloud Displays also rely on AMD G-Series APUs.

Said Seog-Gi Kim, senior vice president, Visual Display Business, Samsung Electronics, “Samsung’s powerful Windows Thin Client Cloud displays combine professional, ergonomic design with advanced thin-client technology.” The displays rely on the company’s Virtual Desktop Infrastructure (VDI) through a centrally managed data center that increases data security and control (Figure 3). Applications include education, business, healthcare, hospitality or any environment that requires virtualized security with excellent local processing and graphics.

Key to the design wins is the performance density of the G-Series APUs, coupled with legacy x86 software interoperability. The APUs–for both HP and Samsung–add more beef to thin clients.

 

What’s the Nucleus of Mentor’s Push into Industrial Automation?

Mentor’s once nearly-orphaned Nucleus RT forms the foundation of a darned impressive software suite for controlling anything from meat packing plants to nuclear power plants.

Everyone appreciates an underdog—the pale, wimpy kid with glasses and brown polyester sweater who gets routinely beaten up by the popular boys—but sticks it out day after day and eventually grows up to create a tech start-up everyone loves. (Part of this story is my personal history; I’ll let you guess which part.)

So it is with Mentor’s Nucleus RTOS, which the company announced forms the basis for its recent initiative into Industrial Automation (I.A.). Announced this week at the ARC Industry Forum in Orlando is Mentor’s “Embedded Solution for Industrial Automation” (Figure 1). A cynic might look at this figure as a collection of existing Mentor products…slightly rearranged to make a compelling argument for a “solution” in the I.A. space. That skinny kid Nucleus is right there, listed on the diagram. Oh, how many times have I asked Mentor why they keep Nucleus around only to get beaten up by the big RTOS kids!

Figure 1: Mentor’s Industrial Automation Solution for embedded, IoT-enabled systems relies on the Nucleus RTOS, including a secure hypervisor and enhanced security infrastructure.

After all, you’ll recognize Mentor’s Embedded Linux, the Nucleus RTOS I just mentioned, and the company’s Sourcery debug/analyzer/IDE product suite. All of these have been around for a while, although Nucleus is the grown-up kid in this bunch. (Pop quiz: True or false…did all three of these products come from Mentor acquisitions? Bonus question: From what company(ies)?)

Into this mix, Mentor is adding new security tools from our friends at Icon Labs, plus hooks to a hot new automation GUI/HMI called Qt. (Full disclosure: Icon Labs founder Alan Grau is one of our security bloggers; however, we were taken by surprise at this recent Mentor announcement!)

Industry 4.0: I.A. meets IoT

According to Mentor’s Director of Product Management for Runtime Solutions, Warren Kurisu (whose last name is pronounced just like my first name in Japanese: Ku-ri-su), I.A. is gaining traction, big time. There’s a term for it: “Industry 4.0”. The large industrial automation vendors—like GE, Siemens, Schneider Electric, and others—have long been collecting factory data and feeding it into the enterprise, seeking to reduce costs, increase efficiency, and tie systems into the supply chain. Today, we call this concept the Internet of Things (IoT), and Industry 4.0 is basically the promise of interoperability between currently bespoke (and proprietary) I.A. systems and smart, connected IoT devices, plus a layer of cyber security thrown in.

Mentor’s Kurisu points out that what’s changed is not only the kinds of devices that will connect into I.A. systems, but how they’ll connect in more ways than via serial SCADA or FieldBus links. Industrial automation will soon include all the IoT pipes we’re reading about: Wi-Fi, Bluetooth LE, various mesh topologies, Ethernet, cellular—basically whatever works and is secure.

The Skinny Kid Prevails

Herein lies the secret of Mentor’s Industrial Automation Solution. It just so happens the company has most of what you’d need to connect legacy I.A. systems to the IoT, plus add new kinds of smart embedded sensors into the mix. What’s driving the whole market is cost. According to a recent ARC survey, reduced downtime, improved process performance, reduced machine lifecycle costs—all of these, and more, are leading I.A. customers and vendors to upgrade their factories and systems.

Additionally, says Mentor’s Kurisu, having the ability to consolidate multiple pieces of equipment, reduce power, improve safety, and add more local, operator-friendly graphics are criteria for investing in new equipment, sensors, and systems.

Mentor brings something to the party in each of these areas:

- machine or system convergence, either by improved system performance or reduced footprint

- capabilities and differentiation, allowing I.A. vendors to create systems different from “the other guys”

- faster time-to-money, achieved through increased productivity, better system design and debug, or anything else that reduces the I.A. vendor’s and its customers’ efforts.

Figure 2: Industrial automation a la Mentor. The embedded pieces rely on Nucleus RTOS, or variations thereof. New Qt software for automation GUI’s plus security gateways from Icon Labs bring security and IoT into legacy I.A. installations.

Figure 2 sums up the Mentor value proposition, but notice how most of the non-enterprise blocks in the diagram are built upon the Nucleus RTOS.

Nucleus, for example, has achieved safety certification by TÜV SÜD, complete with artifacts (the certified version is called Nucleus SafetyCert). Mentor’s Embedded Hypervisor—a foundational component of some versions of Nucleus—can be used to create a secure, partitioned environment for either multicore or multiple processors (heterogeneous or homogeneous) in which to run multiple operating systems that won’t cross-contaminate in the event of a virus or other compromise.

New to the Mentor offering is an industry-standard Qt GUI running on Linux, or Qt optimized for embedded instantiations running on—wait for it—Nucleus RTOS. Memory and other performance optimizations reduce the footprint and speed up boot, and there are now versions for popular IoT processors such as ARM’s Cortex-M cores.

Playground Victory: The Take-away

So if the next step in Industrial Automation is Industry 4.0—the rapid build-out of industrial systems that reduces cost and adds IoT capabilities with secure interoperability—then Mentor has a pretty compelling offering. That consolidation and emphasis on low power I mentioned above can be had for free via capabilities already built into Nucleus.

For example, embedded systems based on Nucleus can intelligently turn off I/O and displays and even rapidly drive multicore processors into their deepest sleep modes. One example explained to me by Mentor’s Kurisu showed an ARM-based big.LITTLE system that ramped performance when needed but kept power to a minimum. This is made possible, in part, by Mentor’s power-aware drivers for an entire embedded I.A. system under the control of Nucleus.

And in the happy ending we all hope for, it looks like the maybe-forgotten Nucleus RTOS—so often ignored by editors like me writing glowingly about Wind River’s VxWorks or Green Hills’ INTEGRITY—well, maybe Nucleus has grown up. It’s the RTOS ready to run the factory of the future. Perhaps your electricity is right now generated under the control of the nerdy little RTOS that made it big.

Getting a bead on the bad guys: COTS-based soft information fusion merges military C4ISR data with web and other sources

A military analyst or command and control operator could soon get much better INTEL by combining military data with information from the web.

Bottom Line: I’m unaware of anyone else yet offering a COTS sensor fusion product that combines hard and soft information sources to take advantage of Internet intelligence.

[Update 4:45pm PDT 19Mar13: corrected “data” to “information”; added explanation of the API and MSCT output; corrected GMTI from plots to tracks.]

Cope Tiger 13 (Courtesy: US Air Force.)

Picture this scenario: a BDU khaki-uniformed DoD analyst is staring at multiple screens of intelligence (INTEL) data and images pertaining to an unmarked ship off the coast of some unnamed country. The ship’s actions have been odd, and the Coast Guard had been tracking it for some time until it went into international waters. New satellite images now show the ship at anchor in a different location than yesterday. What’s it doing there? Are the ship’s intentions nefarious? Who is aboard, and what cargo is aboard?

This kind of scenario vexes joint military forces, Homeland Security, and myriad three-letter agencies.

The challenge for any analyst is to make decisions based upon actionable intelligence by combining every scrap of information into a situational awareness picture that maximizes what the human does best: make a decision or recommendation. The problem with data for DoD and CIA analysts is there’s either not enough of it, or there’s too much. It’s hard to make a decision with limited information, and it’s too time-consuming to dedicate an analyst’s time to culling through SAR images, GMTI (ground moving target indicator) tracks, satellite photos, transcribed radio chatter, action reports, and so on.

As well, decisions are made using more than mere “data”. Sophisticated or low-level sensor outputs are “data” (such as L0/L1 trackers), but other non-traditional asymmetric information not currently in a structured data set might also be relevant and useful to an analyst’s task.

Larus Technologies aims to change all of that with the announcement of its high-level information fusion engine (HLIFE), which melds a “collection of commercially available embedded software modules for C4ISR and Security systems” into an information fusion model. Based on the company’s patent-pending adaptive behavioral learning and predictive modeling algorithms, multiple sensing modalities can now be combined to provide a more complete C4ISR and INTEL picture for analysts.

Larus Technologies' COTS sensor fusion product uses proprietary algorithms to fuse hard military data with soft, unstructured data like web pages or civilian data bases.

But the company’s product is not just one Big Data MUX. Instead, it intelligently combines a mixture of DoD, government and other “hard” and structured data sources with “soft” unstructured sources such as weather reports, search and rescue operator reports, human intelligence (HUMINT), flight schedules, web sites, and myriad other web-based information.

The company’s Total::Insight product is a commercial solution that can immediately leverage high-level information fusion and computational intelligence based upon the DoD’s Joint Directors of Laboratories (JDL) data fusion model. The software performs behavior analysis through predictive modeling, and is “capable of dealing with heterogeneous (multi-source, multi-sensor) data.” The HLIF engine performs anomaly detection, trajectory prediction, intent assessment, threat assessment, and adaptive learning (situational and procedural). Details on these algorithm components can be found in the company’s white paper “Total Maritime Domain Awareness,” which requires registration.
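Larus doesn’t publish its algorithms, but the flavor of hard-plus-soft fusion can be sketched in a few lines: score a structured track from a “hard” sensor, score an unstructured text report against a watch list, and roll both into one composite alert. Everything below (field names, weights, thresholds) is invented for illustration and is in no way Total::Insight’s actual method:

```python
# Hypothetical illustration of fusing a "hard" structured track with a "soft"
# unstructured text report into one alert score. Field names, weights, and
# thresholds are made up for illustration; this is not Larus's algorithm.

# Hard source: a structured track, e.g., from a GMTI or AIS feed.
track = {"id": "vessel-042", "speed_kts": 2.1, "loiter_hours": 14, "in_sar_lane": True}

# Soft source: an unstructured operator report transcribed or scraped from text.
report = "Unmarked vessel at anchor, crew transferring crates at night, no AIS."

WATCH_TERMS = {"unmarked": 0.3, "no ais": 0.4, "night": 0.2, "crates": 0.1}

def soft_score(text):
    """Crude keyword evidence score in [0, 1] from an unstructured report."""
    text = text.lower()
    return min(1.0, sum(w for term, w in WATCH_TERMS.items() if term in text))

def hard_score(trk):
    """Crude anomaly score: slow, loitering traffic in a watched sea lane."""
    score = 0.0
    if trk["speed_kts"] < 3:
        score += 0.3
    if trk["loiter_hours"] > 12:
        score += 0.4
    if trk["in_sar_lane"]:
        score += 0.3
    return min(1.0, score)

# Arbitrary weighting of hard vs. soft evidence into one composite alert score.
alert = 0.6 * hard_score(track) + 0.4 * soft_score(report)
print(f"{track['id']}: fused alert score = {alert:.2f}")
```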

This company is new to me, but the concept of offloading an operator/analyst by providing more upstream intelligence is not. Raytheon’s multi-source correlator tracker (MSCT) does something similar with military data sources such as tactical sensors. In contrast, Larus says it is a neutral COTS vendor that can take output from products like MSCT as well as provide an API so customers can “direct the output (i.e. alerts, warnings, suggested actions) out to their favorite command and control systems.”

Still, I’m unaware of anyone yet offering a COTS product that combines hard and soft data–rather, information–sources to take advantage of Internet-based intelligence. I’ll be watching Larus Technologies; you should too.