From Cable-TV Scrambling to Security: Simple Obfuscation Isn’t Enough

Wednesday, September 5th, 2018

A Dutch company that’s been securing electronic content delivery for decades wants to bring its experience to IoT. Are pay-TV hackers all that different from hackers of IoT?

Mark Hearn, Head of IoT Security, Irdeto

Mark Hearn is the Head of IoT Security at Irdeto, responsible for leading business development strategies to secure organizations’ IoT applications and connected devices. He has been with the company since 2003, joining Irdeto through its acquisition of Cloakware. Mark is a seasoned product management executive with 20 years of experience bringing technology and business requirements together to solve market problems, particularly in the media, entertainment, and security markets. In addition to being a product leader in the private sector, he has provided business analysis and security consulting to the Canadian government and has spoken at security conferences. Mark holds a Bachelor of Computer Science from Acadia University in Nova Scotia, Canada, and has received certifications in product management, technical marketing, and strategic marketing.

Irdeto (pronounced “Ear-debt-oh”) has been in the security business for around 50 years. The company started out in pay television, creating conditional access via techniques such as cable-TV scrambling: higher-end content was scrambled so that pay-TV operators could offer tiered levels of content to consumers. Later, Irdeto expanded into security services covering a wide range of technologies targeting media, and in 2007 it acquired the Canadian start-up Cloakware to create a software-based security system. Irdeto is the leading conditional access vendor in the world today.

However, Irdeto sees tremendous opportunity to contribute to IoT security. A hacker doesn’t view their target as an elevator, car, or a control system; it’s just a computer. The hacking community is constantly sharing information about how a vulnerability translates to different operating systems, or different chipsets, and so forth. Irdeto has been tuned in to the hacker mentality for decades and can bring that knowledge into the IoT market.

Irdeto looks forward to applying decades of experience in securing electronic content delivery to securing IoT. I sat down with Irdeto’s Mark Hearn, Head of IoT Security, to discuss how Irdeto’s long experience with hackers makes a difference when dealing with IoT.

Figure 1:  Major components and functionality of Irdeto’s Cloakware Software Protection solution for improving embedded security. (Image: Irdeto)

Lynnette Reese, (LR): How has the original mission of securing pay-TV delivery from hackers changed over the last four decades? What do pay-TV signaling and IoT have in common?

Mark Hearn, (MH): The first concern is security in both cases. By applying techniques to thwart hackers, Irdeto makes IoT more secure. We open up new business opportunities, new ways of doing business for customers launching into IoT so that they can take advantage of all the benefits of IoT. Think about an industrial control system. Inherently, some security has always been in place with proprietary OSes and chipsets. That kind of model has always been safer because it’s not connected to the outside world. But for the full benefits of IoT, they have to connect, which creates a new variable in an existing business, including the whole security aspect of it, and of what it means to be connected. This is where Irdeto gets involved.

LR: Would you say that you are more of a consulting company now? Or do you have a line of specific products or solutions?

MH: We have a list of products, and we also have a services division. We approach it from the perspective that we are the security partner of our customers. We want to help them understand the threats, the risks that they engage in with what they want to do, and how Irdeto can help them build a strategy. With long experience working with entertainment content, including large concerns in Hollywood, we can help them determine the right balance of investment-to-risk in a given release, and then how they can continually evolve so that they are staying ahead of the hacking community. Irdeto services can help with threat-risk assessments and identify the security requirements. Both Irdeto and partner technology can then be implemented or put into products to secure them. We also have a number of services, including monitoring the dark web and initiating forensics that identifies trends within the hacker community that may affect customers. Irdeto takes the view that security is an ongoing, everyday activity, like breathing.

LR: Are your products mostly software?

MH: While we do interwork with hardware security, we base most of our products on software that lives in the firmware or at the application layer. And for any interaction between a device and the network, we place elements in the system to ensure that communication is secure. We do not work as much on the cloud side as we do at the edge, down with the devices. A hacker could take a customer’s machine, separate it from the network, reverse engineer it, and figure out what it does. Once they figure that out, how do they scale that attack across every other device? This is what we stop.

LR: What is Irdeto’s experience with hackers?

MH: Through the pay-media space, we have been in some pretty deep battles with hackers. Over the years, we have figured out the hacker lifecycle and how hackers try to monetize their work.

LR: How does that work?

MH: Any tampering or reverse engineering starts with analysis. It could be an analysis of timing at the hardware level or merely in software using debug tools. Hackers follow the flow of code, look at memory, and so forth. Irdeto starts by making that part of the process for hackers very difficult.

LR: How?

MH: Irdeto’s Cloakware® Software Protection (CSP) technology goes beyond simple obfuscation and analyzes the complete application code at a global level. By “simple obfuscation,” I mean taking a finished binary and modifying it a little bit. CSP instead applies algorithmic transformations to the code, and even to embedded data, in a non-local fashion, entangling it all so that it is much harder to reverse-engineer than simply obfuscated code.
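The idea of non-local data transformation can be sketched with a toy example. The scheme below is purely illustrative and is not Irdeto's actual technology: integers are held in an affine-encoded form, and arithmetic is rewritten to operate directly on the encoded values, so the cleartext data never appears in memory.

```python
import secrets

# Toy data-transformation obfuscation: values are stored as y = a*x + b mod 2^32,
# and operations are rewritten to work in the encoded domain.
MOD = 2**32

def make_encoding():
    rng = secrets.SystemRandom()
    a = rng.randrange(1, MOD, 2)    # odd, so it is invertible mod 2^32
    b = rng.randrange(MOD)
    return a, b, pow(a, -1, MOD)

A, B, A_INV = make_encoding()

def encode(x):
    return (A * x + B) % MOD

def decode(y):
    return (A_INV * (y - B)) % MOD

def add_encoded(y1, y2):
    # (a*x1 + b) + (a*x2 + b) - b == a*(x1 + x2) + b  (mod 2^32),
    # so addition never leaves the encoded domain.
    return (y1 + y2 - B) % MOD
```

Because `add_encoded` only ever sees encoded values, an attacker dumping memory sees neither the operands nor the result in the clear.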

LR: Isn’t obfuscation a bit like taking all the pieces in a jigsaw puzzle box and throwing them up in the air, with one or more pieces hiding under the couch?

MH: Yes, but with simple obfuscation, reverse engineering can gather all the pieces of the puzzle, including the ones under the couch, and still put them back together. Cloakware Software Protection, instead of taking a binary and just jumbling it up, does things with the map at the very beginning, using algorithms. Continuing the puzzle analogy, you could say that CSP modifies how the puzzle looks before it’s ever printed. So even if a hacker can find all of the pieces, he still cannot put them back together again because they don’t make any sense. Only a legitimate user would be able to do it. We also offer technologies where CSP will even withhold “printing” some of the puzzle pieces until it is already executing and you know that it’s tamper-free.

LR: Setting aside the jigsaw puzzle analogy, what does CSP do?

MH: CSP is based on a random seed that you inject into the code at the beginning of the process. For example, someone could create C++ code and compile it, creating what we call a “clear version” that’s fully readable by anyone. With CSP, you put the “clear” code through the CSP tool, then compile it through the same compiler to produce a “cloaked” version. In this way, we build the security techniques into the source code from the beginning. CSP creates legitimate source code that is wholly mangled from a human point of view, but it’s mathematically linked together.
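A minimal sketch of the seed-driven, source-to-source idea follows. This only mangles identifiers, far shallower than the real tool's transformations, but it shows the workflow: the same seed always yields the same "cloaked" source, which still compiles like the clear version.

```python
import random
import re

# Toy seed-driven source-to-source "cloaking": rename every identifier in a C
# snippet deterministically from a seed. Keywords are left untouched.
KEYWORDS = {"int", "char", "void", "return", "if", "else", "for", "while"}

def cloak(source: str, seed: int) -> str:
    rng = random.Random(seed)
    tokens = set(re.findall(r"\b[a-z_][a-z0-9_]*\b", source)) - KEYWORDS
    mapping = {name: f"v{rng.randrange(10**6):06d}" for name in sorted(tokens)}
    return re.sub(r"\b[a-z_][a-z0-9_]*\b",
                  lambda m: mapping.get(m.group(0), m.group(0)),
                  source)
```

Running `cloak("int add(int a, int b) { return a + b; }", 7)` yields semantically identical C with meaningless names; the real CSP additionally transforms control flow and data, as described above.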

LR: And this doesn’t cause a problem for the compiler?

MH: There are some aspects that can make a compiler a bit antsy, but we follow the same testing standards as the compilers themselves. We regularly test on par with or better than Visual Studio against its standardized compiler test suites. CSP generates semantically correct C and C++ code based upon the original code.

LR: Can’t the code be logically vulnerable?

MH: Even if someone left a buffer overflow in the original source code, CSP would make it a more secure buffer overflow. A flaw that would be effortless to discover in the original code would still be a bug after CSP, but it would be tough to find and thus difficult to exploit. If hackers try to inject anything, our code is so mathematically intertwined that it is very brittle to change, and the program would break.

LR: What other kinds of tricks do you use to make it difficult for hackers?

MH: Besides CSP, we have several techniques built up over 20 years for this kind of technology. We also apply many binary methods after compilation, such as runtime checks of the output that detect whether a debugger is attached or whether someone is scanning memory while the device is running.

LR: Are these binary tools more of a bolt-on thing?

MH: Yes. In general terms, we have libraries that get linked into the binary. While the program is executing, they check that code signatures and memory signatures have not changed. We keep an encrypted image so that we can detect anything that changes during runtime.
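The self-checking idea can be sketched as follows. This is a toy in Python, hashing a function's bytecode and constants, whereas the vendor's libraries check signatures of native code and memory regions; the names here are invented for illustration.

```python
import hashlib

# Toy runtime integrity check: record a digest of a function's code at "build"
# time and re-verify it before each sensitive call.
def critical_routine(x):
    return x * 2

def code_digest(func):
    code = func.__code__
    return hashlib.sha256(code.co_code + repr(code.co_consts).encode()).hexdigest()

REFERENCE_DIGEST = code_digest(critical_routine)   # the stored reference signature

def guarded_call(func, *args):
    # Refuse to run if the code or its constants no longer match the reference.
    if code_digest(func) != REFERENCE_DIGEST:
        raise RuntimeError("code signature changed; refusing to run")
    return func(*args)
```

Calling `guarded_call(critical_routine, 21)` succeeds, while any routine whose code differs from the recorded reference is rejected before it can execute.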

LR: This type of security adds an extra step. What labor is involved?

MH: It adds an extra step to the production process, but the labor involved is reasonable for the protection you get. We recently did training with a new customer; within the first afternoon, their engineer had produced a protected application using blanket protection with nothing customized, and within four hours they had output fully integrated and ready to go. After that, they can customize based on the security strategy we worked out earlier, one founded on risk analysis.

LR: How do these products and technologies work?

MH: Cloakware Software Protection is a suite of advanced technologies, libraries, and tools that enable users to customize protection, whether for an application on a phone, code embedded in firmware, or any other device. It also allows renewable software security; an updated key can utterly re-jigger security. Cloakware supports C, C++, Swift, WebAssembly, JavaScript, iOS, Android, Linux, Mac OS X, Windows, and others. Another product, called Cloakware Secure Environment, uses CSP techniques to create a hardened OS that runs only the software you specify, signs all binaries to prevent modification, encrypts certificates and resources, and protects the system with a hardware-rooted chain of trust.

LR: What type of products has Irdeto protected so far?

MH: Cloakware protects software and applications on more than five billion devices including PCs, set-top boxes, mobile handsets, portable media players, and more. Our pedigree has been built with Digital Rights Management (DRM), conditional access, defeating hackers on Blu-ray discs, and things like that. Media is probably one of the most hacked markets. Much of Irdeto’s success has been accomplished through battle-hardened experience, and we want to bring this experience to the IoT space.

Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades.

Plugging Security Vulnerabilities in Servers and Data Center Architectures

Monday, March 26th, 2018

Adding secure boot to virtually any system offers designers a new tool for thwarting would-be hackers.

Security has been a buzzword for several years across numerous applications. Not surprisingly, defense, military, and government systems were the first to implement significant security countermeasures. Other markets such as communications, data center, and embedded applications have been inconsistent in implementing real security measures. Unfortunately for many companies, security is still a checklist item and not a hard requirement in the initial architecture of the product. Understandably, companies that have had their brands significantly tarnished by hacks have become serious about implementing layers of security in their end products. With so much to lose in terms of dollars and stock valuation, the server and data center markets have become increasingly serious about securing their systems, and the security steps being implemented in these applications can be used throughout many other markets.

In a server or data center architecture, the heart of the system is a high-performance processor doing the heavy lifting for calculating and transporting data. In the vast majority of the cases, an Intel-based processor is being used, with significant effort spent on ensuring the processor is secure. This article is not focused on this area because numerous solutions to address the vulnerabilities already exist. What is often overlooked and underappreciated are the other processors in a system.

For example, a server’s baseboard management controller (BMC) poses a backdoor security vulnerability. If a hacker were to obtain access to a BMC, he could have direct access to the motherboard of the host system. Such access often means the ability to reboot and reinstall the host server, and many times these interfaces provide interactive keyboard, video, and mouse (KVM) support for virtual media access. Even worse, if the BMC allows root access, the hacker could control the I2C and other control buses on the host system.

Many architectures today have multiple processors. The main processor is frequently fortified, while the other control/monitor or bus processors are an afterthought. Another common error occurs when designers expect the software team to implement the security required and also believe that no hardware security is necessary. These preconceptions usually result in a system which is vulnerable to many different types of attacks.

Figure 1: SAS expander block diagram


Secure Boot Vulnerability
In the system shown in Figure 1, a SAS expander blade provides access to the hard drives. If this blade were hacked, the data would be accessible and could be extracted. The Marvell SoC, which runs Linux and is responsible for the network management, does not have secure boot capability; it simply boots from external SPI flash. Secure boot is designed to protect a system against malicious code being loaded and executed early in the boot process—before the operating system has been loaded. This approach aims to prevent malicious software from installing malware or a rootkit and maintaining control over a system to hide its presence.

If a hacker were to obtain physical access to the system or connect to the system via network probing using Intelligent Platform Management Interface (IPMI) commands, she could create a backdoor account with administrative privileges. Although there are software steps that can be taken such as requiring stronger passwords, filtering untrusted networks, and not allowing backdoor accounts, these types of software barriers are not adequate. What is required are layers of security achievable by both hardware and software implementations. Ensuring a processor boots securely is not an issue limited to servers and data center applications, and the techniques described next can be implemented for virtually any system.

Implementing secure boot for a system should encompass both physical and remote network attacks. Although some systems are physically protected, one should not assume a single layer of security is adequate. It was recently proven that keys can be extracted by inexpensive electromagnetic probes several meters away from a product by using differential power analysis (DPA). The multi-layer security suggested for securely booting a processor involves authenticity, integrity, and confidentiality. See Figure 2 for how the previous SAS expander board can be modified to implement secure boot.

Figure 2: SAS Expander with secure boot

What is most notable is how little the modification to implement secure boot affects the overall design. In fact, only two hardware modifications take place: the FPGA/CPLD is migrated to a low-density, flash-based SmartFusion2 SoC FPGA, and the SPI flash is connected to the SoC FPGA instead of directly to the Marvell SoC. In this implementation there are minimal, if any, required software modifications for the Marvell SoC, as the SmartFusion2 performs the majority of the authenticity, integrity, and confidentiality work to ensure a secure boot of this design. The SmartFusion2 device was chosen for five key reasons: it has on-chip secure flash to store keys and other data; it contains an Arm Cortex-M3 to provide the needed flexibility; various security protocols are built in; the devices are inexpensive; and, most importantly, the SmartFusion2 is the only low-density SoC FPGA with built-in DPA countermeasures.

Implementing Secure Boot—Authenticity, Integrity and Confidentiality
The security layers to implement secure boot involve three steps—authenticity, integrity, and confidentiality. Only after each step checks out does the system proceed to the next; if and only if all three pass will the system boot. Authenticity is achieved by checking an RSA signature in the SPI flash. At power-up, the SmartFusion2 reads a particular location in the SPI flash, known only to the developer of the system, which contains the signature. The manufacturer creates the signature with its private key, and the SmartFusion2 holds the corresponding public key, enabling the RSA algorithm to verify that the signature checks out. If it does, the system moves on to the next step, which is integrity.
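The flow of the authenticity step can be sketched as below. A real design verifies an RSA signature against a public key held in the SmartFusion2; an HMAC stands in here so the example runs with only the standard library, and the offset and key values are invented for illustration.

```python
import hashlib
import hmac

# Hypothetical flash layout: firmware in the first 0x100 bytes, 32-byte
# signature at a fixed offset known only to the system developer.
SIG_OFFSET, SIG_LEN = 0x100, 32
DEVICE_KEY = b"key provisioned into secure flash"   # invented for the sketch

def sign_flash_image(firmware: bytes) -> bytes:
    # Pad the firmware region, then append the signature at the known offset.
    padded = firmware.ljust(SIG_OFFSET, b"\xff")
    sig = hmac.new(DEVICE_KEY, padded, hashlib.sha256).digest()
    return padded + sig

def authentic(flash: bytes) -> bool:
    # Recompute over the firmware region and compare in constant time.
    expected = hmac.new(DEVICE_KEY, flash[:SIG_OFFSET], hashlib.sha256).digest()
    return hmac.compare_digest(flash[SIG_OFFSET:SIG_OFFSET + SIG_LEN], expected)
```

Any bit flipped in the firmware region causes the recomputed value to disagree with the stored signature, and the boot sequence stops before the integrity step.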

The integrity step involves the SmartFusion2 calculating a hash of the contents of the SPI flash. Hash algorithms are cryptographic functions that map variable-size data to a fixed-size value; they are deterministic, so the same input data always produces the same hash value, and they are quick to run. A small change in the input data results in a hash value that is uncorrelated with the original, and it is computationally infeasible to find two different data sets that produce the same hash value. A common secure hash algorithm is SHA-256. During the integrity step, the SmartFusion2 calculates a SHA-256 digest and checks it against a value stored in its on-chip flash. If the output matches what is stored, then the next step is allowed.
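The check itself is a one-line comparison; in hardware the reference digest lives in the SmartFusion2's on-chip secure flash rather than in a variable, as in this sketch.

```python
import hashlib

# Integrity step: compare a freshly computed SHA-256 of the SPI flash contents
# against the reference digest stored in on-chip secure flash.
def integrity_ok(spi_flash_contents: bytes, stored_digest: bytes) -> bool:
    return hashlib.sha256(spi_flash_contents).digest() == stored_digest
```

Even a single appended byte produces a completely different digest, so any modification to the flash contents blocks the boot.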

The confidentiality step is implemented by requiring the contents of the SPI flash to be encrypted. At power up, after the authenticity and the integrity check are completed, the SmartFusion2 decrypts the contents of the SPI flash before passing them on to the Marvell SoC. The decryption is done based on a key stored in the flash of the SmartFusion2. If the contents of the SPI flash were modified, then the key would not correctly decrypt the data, and the system would not be allowed to boot. Because we are leveraging an SoC FPGA, after each step a custom response can be implemented. For example, if any of the steps fail, designers can power down all the supplies or as each step passes, illuminate an LED or store the information to view during system maintenance. There are numerous other possible responses that can be implemented.
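The decrypt-before-handoff idea can be sketched with a toy stream cipher. A real SmartFusion2 design would use a hardware cipher such as AES with a key held in secure flash; the SHA-256 counter-mode keystream below is only a stand-in so the example runs with the standard library.

```python
import hashlib

# Toy confidentiality step: XOR the SPI flash contents with a keystream derived
# from the key stored in the SmartFusion2. Decrypting with the wrong key yields
# garbage, and the (already hashed/signed) image would fail its checks.
def keystream(key: bytes, n: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Here the SoC FPGA would call `xor_crypt` on the flash contents and pass the recovered image to the Marvell SoC only after the earlier authenticity and integrity checks have passed.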

Secure boot will certainly be added to more embedded processors as next-generation architectures come to fruition. Until that time, this secure boot layer of security can be added to virtually any system, as most processors boot from SPI or other types of flash. It is exciting to know that this implementation causes minimal disruption to the system architecture. Given this fact and the major benefits it provides, designers should seriously consider adding this type of security to their system and control processors.

Ted Marena is the director of FPGA/SOC marketing at Microsemi. He has over 20 years’ experience in FPGAs. Previously Marena has held roles in design engineering, technical sales/support, business development, product and strategic marketing. He was awarded Innovator of the Year in February 2014 when he worked for Lattice Semiconductor. Marena has defined, created and executed unique marketing platform solutions for vertical markets including consumer, wireless small cells, industrial, cameras, displays and automotive applications. Marena holds a Bachelor of Science in electrical engineering Magna Cum Laude from the University of Connecticut and an MBA from Bentley University’s Elkin B. McCallum Graduate School of Business.


The Three Essential Ingredients for Ensuring Device Security

Friday, March 16th, 2018

 Embedded security is more than a feeling, notes the CTO of a company responsible for the security underpinning over a billion smartphones.

What does it take to secure a device? Beyond the nuances of the embedded world, there are three core principles at play that professionals in embedded security need to understand.

Simplicity. Legitimacy. And trust.

“Security” is an interesting term. The dictionary defines it as both the “safety against criminal activity” and “the state of feeling safe.”

Figure 1: Before the “main event,” i.e., that of an OEM service enrolling a small embedded device, such as the Nuvoton M2351 powered by ARM Cortex-M23, shown here, much takes place backstage.

The perception of security is extremely important. The Web as we know it didn’t really take off until mechanisms and procedures were put in place to protect against criminal activity and give consumers that all-important confidence that it was safe to shop online. Before the advent of 128-bit SSL, the Internet was little more than a glorified encyclopedia.

The embedded world being a rather different place, there is not a monolithic global application or “public” to convince about that “state of feeling safe.” Many embedded applications have gotten along just fine with scant regard for security—either in a technical or emotional sense. However, things are changing, and we can blame the Internet for that.

The opportunities for connected devices are immense—whether “connected” means “to the Internet” or simply a local network. The security risks are there as soon as it is possible for the bad guys to inject themselves into the conversation. Many legacy systems were designed to be physically inaccessible to attack, but physical boundaries no longer apply. This includes factories connecting closed networks to the cloud and critical in-car/in-plane networks being co-opted for infotainment and other new purposes. These scenarios could open up the possibility of indirect online attacks—and even physical attacks cannot be discounted, as device owners try to subvert software-enforced limitations on features or performance.

In the Internet, security was a huge enabler. The costs were localized to a few browser and web server authors, and the benefits were there for anyone wanting to set up or use an online store. The rest, as they say, is history.

What then of the embedded world? Security may be just as necessary—but is there a similar benefit? How does a device maker even “do” security—and what would they get out of it—other than the feeling that they are less likely to be at risk? These are the questions we had penned on a whiteboard 18 months ago, when, as a company responsible for the security underpinning over a billion smartphones, we turned our attention to IoT.

The first and most obvious requirement is simplicity. SSL is an extremely complex system—but it makes it easy for anyone writing a web application to “be secure”—and for anyone using that web application to believe that it is safe. As we have all found, having a strong front door like SSL is not enough—plenty of other things need to be correct to make a system truly secure—but the principle still holds. Security may be a complex subject, but it must be tamed.

One approach is to adopt web standards, such as SSL. That is fine as far as it goes, but SSL and its companion PKI don’t really fit the embedded world very well. An average microcontroller might have 16-64 KB of flash for everything; SSL will happily eat that several times over. SSL is also a “channel-based” security scheme: it secures a pipe between a client, such as a web browser, and a server, and the two ends then “chat” to establish a secure connection. For some embedded applications, this fits just fine. But for others, there may never be such an end-to-end, two-way pipe. Devices may need to send messages which are collated and eventually forwarded: more of a letter than a phone call.

Aping SSL is certainly an option for some applications, but it is also overkill—SSL supports the need for a browser to securely connect to any server in the world. Embedded devices don’t need that flexibility and, therefore, simpler and far smaller approaches are possible.

In the end, core security can be whittled down to a very small API. Essentially: “encrypt a message to give to Bob” and “decrypt a message from Bob.” Though encryption is only one aspect of security, it is the essence that developers can understand and then use to start to make their systems secure. As with SSL, the security experts behind the scenes can worry about the technobabble of integrity, forward secrecy, impersonation replay, and all the other tricky details.
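Such a whittled-down API might look like the sketch below: two calls, with the nonce handling and integrity tags hidden behind them. The scheme is a toy built from stdlib primitives, not a production cipher, and all the names are invented for illustration.

```python
import hashlib
import hmac
import secrets

# Minimal two-call security API: "encrypt a message to give to Bob" and
# "decrypt a message from Bob." Nonces and integrity tags are handled inside.
class SecureChannel:
    def __init__(self, shared_key: bytes):
        self._key = shared_key

    def _stream(self, nonce: bytes, n: int) -> bytes:
        out = bytearray()
        i = 0
        while len(out) < n:
            out += hashlib.sha256(self._key + nonce + i.to_bytes(4, "big")).digest()
            i += 1
        return bytes(out[:n])

    def encrypt_for_bob(self, msg: bytes) -> bytes:
        nonce = secrets.token_bytes(16)
        ct = bytes(a ^ b for a, b in zip(msg, self._stream(nonce, len(msg))))
        tag = hmac.new(self._key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag          # wire format: nonce | ciphertext | tag

    def decrypt_from_bob(self, blob: bytes) -> bytes:
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        expected = hmac.new(self._key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("message failed integrity check")
        return bytes(a ^ b for a, b in zip(ct, self._stream(nonce, len(ct))))
```

The developer sees only the two calls; the tag verification, nonce generation, and keystream details stay behind the scenes, which is exactly the division of labor the text describes.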

The second security requirement is to prove that a device—and a message from it—is legitimate. This takes a little more understanding but is absolutely key and unlocks the potential value of security to many device makers.

Imagine that you produce a widget that is going to connect to the Internet. This might be a fitness band or a smart meter. You build this device, sell it and are successful. Now the “bad guys” smell that success and they decide to get in on the act. Maybe they steal your plans. Maybe they get someone in your supply chain to overproduce a key component. By whatever means, they create a counterfeit. To rub salt into your wounds, this counterfeit device will still connect to your servers. You can’t tell the difference between the real and the fake bits on the wire, so you end up supporting customers who have never even bought your product (and have your reputation ruined when fake devices start to fail, and you get the blame). When you consider sensors that produce faulty—or deliberately wrong—readings, the implications are enormous. Having a securely delivered message from an imposter is of no use to anyone.

The need for device legitimacy is fundamental to a secure Internet of Things environment. One method of creating legitimacy of the device is to ensure that each SoC comes with the ability to generate secure messages that can be attested to have come from a secure part. Further, a simple and low cost means of adding a digital “sticker,” also known as a Digital Hologram™, to each device during manufacture can mark that it has passed through a particular factory or stage. Like physical holograms, a digital version cannot be copied or removed, so that each device carries with it a digital ledger, recording its life—SoC to Module; Module to Device; Device through QA… and so on—service records, purchase, end of life—to record whatever information is needed to distinguish between legitimate and illegitimate devices.
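The "digital ledger" idea can be illustrated with a hash chain; the actual Digital Hologram mechanism is proprietary, and this sketch only shows why a chained record cannot be silently altered.

```python
import hashlib

# Toy hash-chained device ledger: each stage entry is bound to everything
# before it, so a forged or altered record breaks the chain.
def append_stage(ledger: list, stage: str) -> list:
    prev = ledger[-1][1] if ledger else "0" * 64
    digest = hashlib.sha256((prev + stage).encode()).hexdigest()
    return ledger + [(stage, digest)]

def ledger_valid(ledger: list) -> bool:
    prev = "0" * 64
    for stage, digest in ledger:
        if hashlib.sha256((prev + stage).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

A device passing "SoC to Module," "Module to Device," and "Device through QA" accumulates three chained entries; rewriting any one of them invalidates every later digest.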

The final requirement brings this all together. As with security, trust has an emotional sense—the ability to believe in something. By giving devices both a means to prove that they are legitimate and simple security tools to allow developers to deliver on the key security needs, such as secure communication and storage, we can enable trust in devices. That truly enables the Internet of Things and enables device makers to focus on what they do best.

For example, a small embedded device (the Nuvoton M2351, based on the Arm Cortex-M23) is switched on for the first time and seamlessly connects to an Amazon cloud. It can do this because it was securely provisioned with Digital Holograms and has a small “micro TEE” running inside a secure area on the device. At power-on, the device generates an AWS enrollment request (in this case, a Certificate Signing Request). The library adds an attestation to prove that the request came from this device, and the request is sent to the OEM’s cloud service on Amazon. This, in turn, forwards the attestation to the security provider, which reads and decodes the holograms and returns the device history in longhand (XML). Finally, the OEM service can enroll the device, as it knows that the message can only have come from a legitimate device. This demonstrates how something deeply complicated can be made trivial—as Netscape did with SSL back at the birth of the commercial web.

Security is certainly not simple. But developing products with simplicity, legitimacy, and trust can make security easy to use.

Richard Hayton is the Chief Technology Officer of Trustonic. Prior to Trustonic, Hayton was Chief Architect for Citrix Mobility, where he was responsible for crafting the XenMobile Enterprise Mobility Suite. During almost 20 years at Citrix, he led projects ranging from embedded software to global enterprise systems, with a focus on user and developer experience. He holds a PhD in Computer Science from Cambridge University, focusing on identity federation for users, devices and services.

Spectre and Meltdown Create a Deep Threat

Friday, March 2nd, 2018

What you need to know about the security flaws Spectre and Meltdown: neither is a virus; rather, both exploit speculative execution techniques as vulnerabilities in the way hardware works with an OS. Although recently uncovered, both have existed for years.

Since at least 2010, commercial CPU architectures have used predictive actions to speed up processor pipelines, and speculative execution is just another technique used to improve processor performance. There’s still a need for speed, and the door is closing on Moore’s Law: improvements in performance at reduced cost and increased power efficiency aren’t coming as quickly as chip designers butt up against the laws of physics. Speculative, or out-of-order, execution reduces latency by operating on more than one instruction at a time, sometimes in a different order than the instructions entered the processor pipeline.

Figure 1: Simplified drawing of a single core of Intel’s Skylake microarchitecture. Instructions are converted to microoperations (µOps) and executed out of order by individual execution units within the execution engine of the core. (Source: Lipp, Moritz, et al. “Meltdown.”)


Out-of-order execution can happen when a processor reaches for the next instruction while waiting for data to be fetched from memory for the current instruction. The processor might face a decision that depends on the as-yet-unknown value of the data being fetched, yet begin to work on one or more potential outcomes in anticipation of a branch being taken. If the processor finds that it went down the wrong path, it discards those calculations and begins work on the correct branch. Often, however, the processor guesses correctly and has already completed work on the anticipated outcome, saving time. This predictive behavior has been in use for roughly two decades and has recently come to light as an open door to circumvent security, with a large number of servers, personal computers, smartphones, tablets, and nearly every modern processor vulnerable to malware that exploits these speculative tactics. Worse yet, cloud servers are a growing service for big data and IoT and are designed to run other people’s software. Given the growing number of multi-party entanglements, engineers have been working on patches for major operating systems (OSes), web browsers, and processor (hardware) microcode updates. Microcode is as close to the silicon as you can get with a downloadable patch and might be referred to as firmware by some.

Two identified security flaws exploit speculative execution; researchers have named them Spectre and Meltdown. Neither is a virus or malware; rather, they are techniques that expose vulnerabilities in the way the hardware works with the OS. Both have existed for years but were only recently uncovered, and were then kept secret, discussed on a need-to-know basis among major hardware, cloud, and software companies since around mid-2017 as engineers scrambled to create fixes from several different angles. Apple, Intel, AMD, Arm, Microsoft, Google, Qualcomm, Amazon, and Linux kernel engineers knew and were preparing patches until early January, when a mysterious patch in the open source Linux kernel hinted that something was up. People figured it out, and the news was released about a week earlier than intended. No malware that actually uses the security flaws had yet been identified, so keeping quiet about the vulnerabilities was one means of protecting the world against them. However, since neither flaw would show up in a computer's security log, no one knows whether either has been used against a real-world target. Both flaws are OS-agnostic. The Meltdown attack circumvents a central tenet of security: protection through isolation. A kernel is the core of an OS and is supposed to work in isolation from user memory, or "userspace," which is where programs and applications operate. Meltdown allows any process running in userspace to access all of a kernel's privileged memory. Spectre uses similar mechanisms but instead tricks other user programs into leaking their own data.

Figure 2: The Spectre logo. (Image credit: Natascha Eibl, CC0, via Wikimedia Commons)

How Speculative Execution Works
The speculative execution technique is exploited through "cache timing side-channels." A side channel is an unintended source of information that leaks from the physical implementation of a system; here, the time it takes to access cache memory is what leaks. Processor speculation was introduced and became established before security was considered necessary at the bedrock physical layer, that is, at the silicon level. Hardware has always been considered less vulnerable than software, but as technology has become more complex, the lines between hardware and software have begun to blur.

A white paper published by Arm, Cache Speculation Side-channels, details the background of speculative execution and discusses the susceptibility of Arm processors. According to Arm, by timing how long it takes the processor to access the cache, malware can determine which addresses have been allocated into the cache. Malware running on the processor can issue requests that hitchhike on speculative execution. Advanced processors will speculate at least two steps ahead into a branch or decision, and this is where the breakdown occurs, since it is the second speculative access that reveals information based upon the first. Because speculation leaves traces of data in cache memory, malware can eventually recover the entire memory range that the system kernel can access: accessing cache memory takes time, and by analyzing the timing of cache accesses in conjunction with information gathered by the first speculative read, Meltdown- or Spectre-based malware can gather information about the data touched by speculative reads.[i]

Google Project Zero first identified the flaws, citing three variants. Variant 1 is a Spectre flaw that uses "bounds check bypass." Variant 2 is also a Spectre flaw and uses "branch target injection." Variant 3 is a Meltdown flaw that uses "rogue data cache load."[ii]
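A toy simulation makes the timing probe concrete. Everything below is a model, not an exploit: the "cache" is a Python set, and the latencies are invented numbers standing in for measured cycle counts. The point is that the attacker never reads the secret directly; it only observes which of its 256 probe lines loads fast.

```python
import random

SECRET = 0x2A                  # simulated secret byte the attacker cannot read
cached_lines = set()           # simulated cache: which probe lines are resident

def victim_speculative_access() -> None:
    # Side effect of a speculative, secret-dependent read: the cache line
    # indexed by the secret byte is loaded even though the architectural
    # result of the read is discarded.
    cached_lines.add(SECRET)

def probe_latency(line: int) -> int:
    # Invented latencies standing in for cycle counts: hits are far faster.
    return 10 if line in cached_lines else 200 + random.randint(0, 20)

def flush_and_reload() -> int:
    cached_lines.clear()            # FLUSH: evict all 256 probe lines
    victim_speculative_access()     # victim leaves a footprint in the cache
    timings = [probe_latency(line) for line in range(256)]
    return min(range(256), key=timings.__getitem__)  # RELOAD: fastest line wins

print(hex(flush_and_reload()))     # the attacker recovers 0x2a from timing alone
```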

Figure 3: The Meltdown logo. (Image credit: Natascha Eibl, CC0, via Wikimedia Commons)

Both Spectre and Meltdown use side channels to steal data from privileged memory locations. The difference between them is that Meltdown breaks the isolation between the OS kernel and userspace; as a consequence, software running in userspace gains access to system memory that it would not have without exploiting the long-existent but newly found security flaw.[iii] Malware exploiting the Spectre flaw induces "victim" software to perform speculative operations that it would not perform during normal program execution, with the result that the victim's confidential information can be leaked through a side channel.[iv] Sadly, technologists like Simon Segars, CEO of Arm, know that this will not be the last security flaw springing from how hardware works with software at such a deep level. As Segars stated at the Consumer Electronics Show, "The reality is there are probably other things out there like it that have been deemed safe for years." Rapid growth in technology delivers more complexity as we build upon yesterday's innovations, even as it makes life richer and more productive. However, we are our own worst enemy: those who benefit from technology also exploit it for personal gain at everyone else's expense. The best anyone can do is make sure that OS and browser updates are allowed to take effect as soon as they are available.

Who and What are Vulnerable
Anyone using a processor that was built around 2010 or later and has exposure to external connectivity may be affected, as this was when speculative processing was first commercially deployed. Some sources state that speculative execution has been in use for "roughly 20 years."[v] Malware cannot infect something it cannot reach: an isolated processor with no connection to the internet or to external devices such as USB sticks will not be affected. Processor architectures vary, and not all are equally exposed. For instance, AMD believes that Variant 1 can be fixed with just an OS patch for AMD processors. AMD also states that its processor architectures would make it difficult to use Variant 2 techniques; nevertheless, AMD is making optional microcode updates available and recommends updating with an OS patch. The AMD site states that AMD processors are not susceptible to Variant 3 (Meltdown) "due to our use of privilege level protections within paging architecture."[vi] Initial updates for Intel architectures caused some problems with older machines, but the issue has since been fixed. OS patches that make the kernel harder to exploit have been released for Linux, Windows, and iOS. Major web browsers have had updates to address the problem as well. Anyone with a computer, smartphone, or tablet should have seen a system update starting at the end of 2017. Many expect some degradation of performance with the fixes, but each combination of OS, processor, and browser will create a different experience. Hackers have made use of the news by phishing and offering "patches" that are actually malware; official information about the flaws and how to protect against them is available on the researchers' disclosure site.

If the processor does not run an OS (as is typical of microcontrollers), it is not affected by the Meltdown or Spectre flaws. Fixes for these potential vulnerabilities include hardening the OS kernel, the root from which everything that needs an OS proceeds. Patches for major OSes were released just before, or shortly after, this hardware vulnerability was revealed. Manufacturers of affected processors are also providing firmware update patches, although not all processors have patches yet. Some processors are deemed unaffected, such as Arm® Cortex®-M processors. If a processor does not use speculative techniques as described, it is not affected. AMD Radeon GPUs, for example, do not use speculative execution and therefore are not susceptible to Meltdown- or Spectre-based malware.


The white paper "Meltdown" covers the mechanics of the Meltdown flaw in detail, as well as means to mitigate potential attacks.[vii] Kernel Address Space Layout Randomization (KASLR), a technique that randomizes the location of kernel code at boot-up, was introduced to the Linux kernel as early as version 3.14 and enabled by default in Linux kernel 4.12 in May 2017. KASLR randomizes memory mapping, which helps obfuscate locations in memory, but it is not bullet-proof. It is clear that patches alone are not the answer: chip makers will need to redesign processors to avoid side-channel vulnerabilities. As engineers create new products, standing on the shoulders of those who came before them, technology naturally gains in complexity, and it is rare for any one engineer to understand the entire picture from an integrated-system point of view.

Engineers and IT personnel worldwide are still working hard to apply firmware updates and software patches before the flaws can be exploited. Clearly, design must take a more holistic view to marrying the security of hardware and software as an integrated entity, and the best security experts will be those who have extensive experience in both hardware and software.

Lynnette Reese is Editor-in-Chief, Embedded Intel Solutions and Embedded Systems Engineering, and has been working in various roles as an electrical engineer for over two decades. She is interested in open source software and hardware, the maker movement, and in increasing the number of women working in STEM so she has a greater chance of talking about something other than football at the water cooler.

[i] “Cache Speculation Side-Channels.” Arm, Feb. 2018.

[ii] Horn, Jann. “Project Zero.” Reading Privileged Memory with a Side-Channel, Google, 3 Jan. 2018,

[iii] Meltdown and Spectre, Graz University of Technology, Jan. 2018,

[iv] Kocher, Paul, et al. “Spectre Attacks: Exploiting Speculative Execution.” [1801.01203] Spectre Attacks: Exploiting Speculative Execution, Cornell University Library, 3 Jan. 2018,

[v] “An Update on Spectre and Meltdown.” SecureRF, SecureRF, 12 Feb. 2018,

[vi] “AMD Processor Security.” AMD, 11 Jan. 2018,

[vii] Lipp, Moritz, et al. “Meltdown.” Computer Science> Cryptography and Security, Cornell University Laboratory, 3 Jan. 2018,

Internet of Things (IoT) Security Market to grow at a CAGR of 35.21% to 2022 & reach USD 41.85 Billion by 2022

Wednesday, October 25th, 2017

The Internet of Things (IoT) security market is driven by rising security concerns around critical infrastructure and by strict government regulations. It is expected to grow from USD 7.90 Billion in 2016 to USD 41.85 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 35.21%. The year 2016 is the base year for the study, while the market size forecast covers 2017 to 2022.

Major manufacturers in the global Internet of Things (IoT) security market include Cisco Systems, Inc., IBM Corporation, Infineon Technologies, Intel Corporation, Symantec Corporation, Arm Holdings PLC, Check Point Software Technologies Ltd., Trend Micro, Inc., PTC, Inc., Gemalto NV, Sophos Group PLC, Inside Secure, and Wurldtech Security Technologies Inc.


The global Internet of Things (IoT) security market is segmented by type into:
• Network Security
• Endpoint Security
• Application Security
• Cloud Security

Iris Recognition for Secure Digital ID

Monday, February 6th, 2017

Here’s why the biology of our irises makes them reliable, secure digital ID tools.

After the introduction of fingerprint scanners in mobile phones, biometrics has become a core feature of our mobile devices. Remembering, forgetting, and recalling passwords is arguably one of the biggest pain points in our digital lives, and biometrics is perhaps the easiest way of addressing it: there's no need to remember something that consumers always have, such as fingers, eyes, and faces. While biometric technology based on fingerprints was the first to find widespread use, more mobile phone makers, building on that success, are now adding iris recognition to their latest models.

Fujitsu/NTT DOCOMO launched the world's first mobile phone with iris recognition, the F-04G, in mid-2015, followed by the F-02H in the winter of 2015. Other mobile devices with this feature include the Lumia 950 and 950XL, the HP Elite x3 (Figure 1), Fujitsu's F-02 tablet, and the ill-fated Samsung Note 7. The interest in the iris stems primarily from better reliability than even fingerprints, as well as higher security. Both of these advantages come from the iris's biological characteristics.

Intricate and Yours for Life

The iris is the doughnut-like structure around the pupil of the eye. A muscle, the iris controls the size of the pupil to control the amount of light that can enter the eye. Like any of our bodies’ other muscle structures, the iris has a rich and unique pattern. This unique iris pattern, as with fingerprint patterns, is what computer algorithms use to derive a unique identity for each iris and associate it with the identity of the individual.

The iris pattern is even more complex (richer) than any fingerprint pattern, so it has more information content, which translates into more entropy and a higher level of security; think of how six-digit passcode security compares to four-digit passcode security. The iris gets its color from a pigment called melanin, which varies in color from person to person. The iris is fully formed before birth, during gestation, and remains the same for life. As an internal organ completely covered by a transparent layer called the cornea, the iris is a more stable and reliable biometric modality.
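The passcode comparison can be made concrete with a little arithmetic: each added decimal digit contributes log2(10), about 3.3 bits of entropy. (Published estimates put the usable entropy of an iris code at a couple of hundred bits, far beyond any passcode, though the exact figure depends on the encoding.)

```python
import math

def passcode_entropy_bits(digits: int) -> float:
    # A d-digit decimal passcode has 10**d equally likely values,
    # so its entropy is log2(10**d) bits.
    return math.log2(10 ** digits)

print(round(passcode_entropy_bits(4), 1))  # 13.3 bits
print(round(passcode_entropy_bits(6), 1))  # 19.9 bits
```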

Figure 1: The HP Elite x3 has an integrated iris scanner. [Source: Maurizio Pesce, Milan, Italy]


Overcoming Challenges

The Unique Identification Authority of India (UIDAI) has created the world's largest biometric-based citizen and resident authentication system based on the iris. During its initial pilot studies, the UIDAI verified:

  • The iris does not wear out with age or with use.
  • Weather changes do not affect iris authentication.
  • Iris image capture does not require physical contact; capturing the iris image is physically similar to the familiar practice of taking photographs.
  • Iris capture requires only simple instructions, such as, "Look at the camera; keep your eyes wide open."
  • A fake iris is difficult to synthesize, making impersonation harder.
  • The iris image cannot be captured without the individual's cooperation.
  • The spread of low-cost consumer cameras has driven down the cost and eased the manufacturing of iris cameras.

While these pilots and subsequent tests done by UIDAI have confirmed the advantages of the iris, early adopters of iris-enabled mobile phones have also reported some problems:

  • Difficult use under direct sunlight
  • Difficulty detecting the iris when using certain kinds of glasses
  • Difficulty detecting the iris while the user is moving

Despite these problems, iris biometric technology is favored over fingerprint biometric technology, which has proven unreliable depending on the individual's age, occupation, and other external conditions. Many young people have soft skin whose wrinkles affect scanning, and older people tend to have dry, brittle skin that does not make the proper contact for scanning. People involved in manual labor, such as construction workers and farmers, end up damaging their fingerprints. Additionally, fingerprints are easily left behind on devices and other objects we touch, which can make it easier for sophisticated adversaries to steal them.

One company taking on the challenges associated with iris biometric technology is Delta ID. The company's ActiveIRIS® technology includes advanced algorithms that compensate for these challenges and provide users with an easy-to-use, secure iris recognition system that works for mobile users across age groups, occupations, and usage conditions. The Delta ID ActiveIRIS software compensates for the motion blur introduced when the user is moving, occlusion of the eye by the eyelashes under direct sunlight or by reflections on glasses, and many more usage scenarios.

Research and Markets predicts the global iris recognition in access control market (authentication, biometrics, cards, touch screens) to grow at a CAGR of 18.09% during the period 2016-2020[1].

Figure 2: “Unlike fingerprints, the iris-enabled identification can be touchless and seamless, adding to the in-cabin experience.”


The higher security and reliability of the iris has significant appeal for applications and services spanning multiple vertical markets. On mobile devices, one of the primary uses of biometrics has been mobile payments and banking. The success of mobile-enabled financial applications hinges on the usability and security of the biometric modality used for authentication. Performing better than fingerprints on both fronts, iris biometric technology is expected to see more and more adoption in the near future. In the automotive sector, we're seeing interest in iris biometric technology for driver identification and driver monitoring. Unlike fingerprints, iris-enabled identification can be touchless and seamless, adding to the in-cabin experience (Figure 2). Driver identification can then be used for multiple use cases: in-cabin customization, security, pay-as-you-go insurance plans, and auto-enabled payments, as well as at gas stations, parking lots, drive-through restaurants, and more.

The applications of this technology can be endless once consumers recognize the superior user experience and security.

Dr. Salil Prabhakar is President and CEO of Delta ID Inc., a California technology company he co-founded in 2011.

He is an expert in the area of biometric fingerprint and iris scanning technology. Dr. Prabhakar has co-authored 50+ publications (14,000+ Google Citations), two editions of the award-winning Handbook of Fingerprint Recognition, five book chapters, and eight edited proceedings. He has several patents granted and pending. He has served as an Associate editor of IEEE Trans. on Pattern Analysis and Machine Intelligence, SPIE Journal of Electronic Imaging, EURASIP Journal of Image and Video Processing, Elsevier Pattern Recognition, and Current Bioinformatics. He was lead guest co-editor of April 2007 IEEE Transactions of Pattern Analysis and Machine Intelligence Biometrics Special Issue. He has been a co-chair/program chair for 10+ IEEE, IAPR and SPIE conferences, was general co-chair of the 5th International Conference on Biometrics in 2012 in New Delhi. He was VP Finance of IEEE Biometrics Council during 2010-2012.



Improving Security with Bluetooth Low Energy 4.2

Monday, September 12th, 2016

With version 4.2, Bluetooth Low Energy (BLE) offers new features to enhance privacy and security to address vulnerabilities of earlier versions of BLE as well as to improve energy efficiency.

Protecting a user’s private information is important for every wireless device, from fitness monitors to payment systems. Privacy mechanisms prevent devices from being tracked by untrusted devices. Secure communications keep data safe while also preventing unauthorized devices from injecting data to trigger unintended operation of the system.

To maintain the privacy of the BLE devices, trusted BLE devices use a shared secret called the Identity Resolving Key (IRK) to generate and resolve a random address known as the Resolvable Private Address (RPA). Only if a device has the advertising device’s IRK can it track the movement of the advertising BLE device.

The IRK is shared between devices at the time of pairing and is stored in the internal memory of the devices during bonding in a list called the Resolving List. Thus, devices that have bonded earlier can resolve a peer device’s private address.
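The generate-and-resolve flow can be sketched in a few lines. Per the Bluetooth Core Specification, an RPA is a 24-bit random part (prand, with its top two bits set to 0b01) concatenated with a 24-bit hash computed as ah(IRK, prand), where ah() is based on AES-128. The sketch below substitutes a truncated SHA-256 for the AES-based ah() purely to stay dependency-free; the structure of the flow is what matters, not the stand-in primitive.

```python
import hashlib
import os

def ah(irk: bytes, prand: bytes) -> bytes:
    # Stand-in for the AES-128-based ah() function of the Core Spec:
    # SHA-256 truncated to 24 bits plays the same structural role here.
    return hashlib.sha256(irk + prand).digest()[:3]

def generate_rpa(irk: bytes) -> bytes:
    prand = bytearray(os.urandom(3))
    prand[0] = (prand[0] & 0x3F) | 0x40   # top two bits 0b01 mark "resolvable"
    prand = bytes(prand)
    return prand + ah(irk, prand)          # 24-bit prand || 24-bit hash

def resolve_rpa(irk: bytes, rpa: bytes) -> bool:
    prand, hash_part = rpa[:3], rpa[3:]
    # A receiver tries each IRK in its Resolving List; here, just one.
    return ah(irk, prand) == hash_part

irk = os.urandom(16)                         # 128-bit IRK shared during pairing
rpa = generate_rpa(irk)
assert resolve_rpa(irk, rpa)                 # a bonded peer resolves the address
assert not resolve_rpa(os.urandom(16), rpa)  # a stranger (wrong IRK) cannot
```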

In Bluetooth 4.1, the Resolving List is maintained in the Host. Address resolution is also done by the Host. This requires Host intervention every time an advertisement packet with an RPA is received. In Bluetooth 4.2, the Resolving List is maintained in the Controller. Since the Controller resolves the private address, the Host does not need to wake up in devices where the Host is implemented using a separate CPU. This lowers overall power consumption. Even in the devices where both the Controller and the Host are implemented using the same CPU, power consumption shrinks as the address does not need to go through the various protocol layers, thus reducing the number of CPU cycles needed to resolve the address.

The RPA can also be changed over time, making it more difficult to track a private device. In the case of Privacy 1.1 (Bluetooth 4.1), the recommended RPA timeout is 15 minutes. However, such privacy had limited usability because of the impact on connection time and power consumption. In addition, features like device filtering and Directed Connectable Advertisement (DCA) cannot be used in Privacy 1.1 while using RPA, as the address cannot be resolved at the Link Layer.

Privacy 1.2 in Bluetooth 4.2 allows for an RPA timeout period from 1 second up to 11.5 hours. As BLE 4.2 supports address resolution at the Link Layer, DCA can be used to speed connection between devices and consume less energy doing so.

Passive Eavesdropping
To protect communications from unauthorized access, wireless systems must prevent passive eavesdropping and man-in-the-middle (MITM) attacks. In passive eavesdropping, a third device quietly listens to the private communication between two devices (see Figure 1). Protection against passive eavesdropping is important in applications like payment solutions where confidentiality of information like passwords is of utmost importance.

Figure 1: In a passive eavesdropping attack, a third device listens in on the communications between two devices.


Systems can protect against passive eavesdropping by using a key to encrypt data. LE Secure Connections, introduced in Bluetooth Low Energy 4.2, uses the Federal Information Processing Standard (FIPS)-compliant Elliptic Curve Diffie-Hellman (ECDH) algorithm to generate a shared key, the Diffie-Hellman Key (DHKey). The DHKey is used to generate other keys, such as the Long Term Key (LTK), but is itself never shared over the air. Because the DHKey is never exchanged over the air, it is very difficult for a third device to derive the encryption key. Earlier Bluetooth Low Energy devices (Bluetooth 4.1 or older) used easy-to-guess Temporary Keys (TK) to encrypt the link for the first time; Long Term Keys (LTK), along with other keys, were then exchanged between devices over this encrypted but potentially compromised link.

MITM is a scenario where, as two devices try to communicate with each other, a third device inserts itself between them and emulates both devices to each other (see Figure 2). Authentication protects against MITM by ensuring that the device a system is communicating with is actually the intended device and not an unauthorized device emulating the intended one.


Figure 2: During a Man-in-the-Middle attack, a third device inserts itself into a connection and emulates both devices as if they are directly connected.

In Bluetooth, an association model is a mechanism that two devices use to authenticate each other and then securely exchange data. In Bluetooth, pairing is the process of key exchange. However, before keys are exchanged, both devices must share pairing parameters that include authentication requirements. If authentication is required, both devices must authenticate each other using one of the association models.

Which model to use is based on three parameters:

  1. Is MITM protection required?
  2. Can the device receive data from a user (such as through a button or keyboard) or output data to the user (such as on an LCD capable of displaying a six-digit decimal number)? Involving the user in the pairing process is an important element in the secure transfer of data.
  3. Can the device communicate Out-of-Band (OOB)? For example, if part of the security key can be transferred between the two devices over Near-Field Communication (NFC), an eavesdropper will not be able to make sense of the final data.

Four association models are available in BLE 4.2:

  1. Numeric Comparison—Both devices display a six-digit number and the user authenticates by selecting ‘Yes’ if both devices are displaying the same number. This association model is introduced in LE Secure Connections in Bluetooth 4.2. With legacy pairing (Bluetooth Low Energy 4.1 or older), these IO capabilities would have led to a Just Works association model (unauthenticated).
  2. Passkey Entry—The user either inputs an identical Passkey into both devices, or one device displays the Passkey and the user enters it into the other device. Exchanging the Passkey one bit at a time in Bluetooth 4.2 is an important enhancement over the legacy Passkey Entry model (Bluetooth 4.1 or older), where the whole Passkey is exchanged in a single confirm operation. The bit-by-bit exchange ensures that no more than two bits of an unguessed Passkey can leak before the protocol fails the pairing procedure.
  3. Out of Band (OOB)—The OOB association model is the model to use if at least one device with OOB capability already has cryptographic information exchanged out of band. Here, protection against MITM depends on the MITM resistance of the OOB protocol used for sharing the information. In BLE 4.1 or older (legacy pairing), both devices needed to have OOB capabilities for the OOB association model to be used.
  4. Just Works—This association model is used either when MITM protection is not needed or when devices have IO capabilities as mentioned in Table 1.

Table 1 shows the association model that can be used based on the IO capabilities when LE Secure Connections is used for pairing. However, when either MITM protection is not required or OOB data is available with any of the BLE devices, IO capabilities can be ignored.


Table 1: The appropriate association model to use depends upon the I/O capabilities of the two devices.

Bluetooth Low Energy 4.2 has three association models that protect against MITM and one for applications that don't need MITM protection. The Numeric Comparison model is not available in BLE version 4.1 and older, which leaves only the Passkey Entry association model for authenticated pairing when OOB data is not available. The Passkey Entry model requires a keypad for entering the passkey, which may not be possible in many systems, limiting the use of this MITM protection capability. Numeric Comparison, by contrast, can be used with display capabilities and just a yes/no input, extending MITM protection to more applications.
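The selection rules can be summarized in a short decision function. This is a simplified sketch with hypothetical parameter names; the authoritative mapping is the IO-capability matrix (Table 1) in the Bluetooth Core Specification.

```python
def choose_association_model(mitm_required: bool,
                             oob_data_available: bool,
                             both_can_display_and_confirm: bool,
                             keyboard_available: bool) -> str:
    # Simplified LE Secure Connections association-model selection.
    if oob_data_available:
        return "Out of Band"
    if not mitm_required:
        return "Just Works"          # IO capabilities are ignored entirely
    if both_can_display_and_confirm:
        return "Numeric Comparison"  # new with LE Secure Connections
    if keyboard_available:
        return "Passkey Entry"
    return "Just Works"              # no usable IO: unauthenticated fallback

print(choose_association_model(True, False, True, False))  # Numeric Comparison
```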


Pairing is the process of key exchange and authentication. There are two types of pairing that are dependent on Bluetooth Low Energy version: LE Secure Connections (introduced in Bluetooth 4.2) and LE Legacy Pairing (supported in Bluetooth 4.0 onwards). LE Secure Connections significantly improves security over previous versions.

Pairing in Bluetooth Low Energy is divided into three phases. During the first phase, devices exchange their pairing parameters. Pairing parameters are the capabilities and security requirements that are used to determine the association model to be used. The pairing parameters consist of various fields, as shown in Figure 3.


Figure 3: Pairing parameters exchanged during phase 1 of pairing in BLE 4.2.

LE Secure connection uses a Federal Information Processing Standard (FIPS) compliant Elliptic Curve Diffie-Hellman (ECDH) algorithm that allows devices to establish a shared key between two devices over an unsecured channel. The form of ECDH used is P-256, which implies that the private key generated by the devices is 256 bits (or 32 bytes) in length.

Prior to executing the ECDH algorithm, both devices must settle upon a certain set of domain parameters. In the case of LE Secure Connections, both devices know the parameters by default, as they follow the FIPS-compliant P-256 ECDH mechanism. After this, each device generates a pair of keys. The first is called the private key, which the device never shares or sends over the air. The second is called the public key; it is generated from the private key and a generator point that is part of the domain parameters.

After this, each device sends its own public key to the other device. Using the public key received from the other device, its own public key, and its own private key, both devices are able to generate a shared key. Note that a passive eavesdropper can only sniff the public key exchanged between the devices. Without one of the private keys, it cannot generate the shared key that is used for further encryption. In this way, ECDH can generate a shared key over an insecure channel and encrypt the link.
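The principle is easy to demonstrate with the finite-field form of Diffie-Hellman, which has the same shape as the P-256 ECDH exchange BLE actually uses: a private value that is never transmitted, a public value that is, and a shared secret both sides compute independently. The prime and generator below are toy demo values, not a production group.

```python
import secrets

# Toy finite-field Diffie-Hellman. BLE 4.2 uses the elliptic-curve version
# over NIST P-256, but the structure of the exchange is identical.
p = 2**127 - 1   # a Mersenne prime; demo value only, not a production group
g = 3

a = secrets.randbelow(p - 2) + 1   # device A's private key: never sent
b = secrets.randbelow(p - 2) + 1   # device B's private key: never sent
A = pow(g, a, p)                   # public keys: these DO cross the air
B = pow(g, b, p)

shared_a = pow(B, a, p)            # A combines its private key with B's public key
shared_b = pow(A, b, p)            # B does the mirror-image computation
assert shared_a == shared_b        # same key on both sides, never transmitted
```

An eavesdropper sees only g, p, A, and B; recovering the shared value from those is the discrete logarithm problem, which is what makes the exchange safe over an insecure channel.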

Figure 4 shows how two devices can establish a shared secret when a third device is listening to the communication between them.


Figure 4: Establishment of a shared secret when a third device is listening.

In phase 2, the ECDH key pair is generated and public keys are shared to authenticate devices and establish link encryption. To ensure that the device with which a device is communicating is the intended device, authentication is performed using one of the association models. The devices generate a Long Term Key (LTK) from the shared key of the ECDH process and proceed towards the second stage of the authentication check, which involves checking the DHKey.

In phase 3, the LTK is used to encrypt the link. Once the link is encrypted, keys are shared as indicated by Initiator Key Distribution/Responder Key Distribution flags in the pairing parameters (e.g., the IRK that is needed if the RPA is used).

Data Signing
Data signing is another feature available in BLE that helps add another level of security. BLE can use a Connection Signature Resolving Key (CSRK) to authenticate data when encryption is not used. To do this, a signature is generated using the signing algorithm and a counter. The counter is incremented with each Data PDU to avoid any replay attack. Note that Data Signing does not protect against passive eavesdropping. Rather, it verifies to the receiving device the authenticity of the device from where the data originated.
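A sketch of signed writes with a rolling counter follows. The Core Specification uses AES-CMAC keyed with the CSRK; HMAC-SHA-256 stands in here to keep the example dependency-free, and the key value is a placeholder.

```python
import hashlib
import hmac

CSRK = bytes(16)      # placeholder 128-bit signing key, shared during bonding
last_counter = -1     # receiver's record of the highest counter accepted

def sign(pdu: bytes, counter: int) -> bytes:
    # HMAC-SHA-256 stands in for the AES-CMAC the Core Spec actually uses;
    # the signature covers the PDU plus the 32-bit sign counter.
    msg = pdu + counter.to_bytes(4, "little")
    return hmac.new(CSRK, msg, hashlib.sha256).digest()[:8]

def verify(pdu: bytes, counter: int, sig: bytes) -> bool:
    global last_counter
    if counter <= last_counter:                 # stale counter: replayed PDU
        return False
    if not hmac.compare_digest(sig, sign(pdu, counter)):
        return False                            # wrong key or tampered data
    last_counter = counter                      # accept and remember counter
    return True

s = sign(b"signed-write", 1)
assert verify(b"signed-write", 1, s)      # genuine, fresh PDU is accepted
assert not verify(b"signed-write", 1, s)  # byte-identical replay is rejected
```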

Bluetooth Low Energy 4.2 offers strong security mechanisms to enable secure wireless systems. Although BLE 4.1 and 4.2 offered features to guard against MITM, a truly secure BLE system can be implemented only using Bluetooth 4.2. When LE Legacy pairing is used (Bluetooth 4.1), only the OOB association model protects against passive eavesdropping. Bluetooth 4.2 also includes an additional association model called Numeric Comparison and uses an Elliptic Curve Diffie-Hellman algorithm to ensure privacy and data security.

For more details on Bluetooth 4.2 privacy and security features, see application note AN99209 or the Bluetooth Core Specification.

Sachin Gupta is a Staff Engineer, Product Marketing, with Cypress Semiconductor. He loves working on different types of analog and digital circuits, as well as synthesizable code. He holds a diploma in Electronics and Communications from Vaish Technical Institute and a Bachelors in Electronics and Communications from Guru Gobind Singh Indraprastha University, Delhi, and has eight years of experience in SoC applications.

Richa Dham is Sr. Marketing and Applications Manager with Cypress Semiconductor. Her interests lie in defining new solutions, especially in the connectivity and IoT area. She completed her Masters in Technology in Communications Engineering from the Indian Institute of Technology, Delhi (IITD).

Compute-Storage Proximity: When Storage at the Edge Makes More Sense

Monday, June 27th, 2016

Why it’s time to take a new look at high-performance storage at the edge.

When is localized storage more viable than a cloud solution? It’s a considerable debate for many organizations, and requires a thoughtful, case-by-case examination of how stored data is used, secured and managed. Cloud migration may or may not be ideal, depending on considerations such as data accessibility, control and ownership. Data-intensive applications close to the source have great impact on how and why storage choices are made — considerations include mission-critical data sensitive to latency and bandwidth, as well as compliance and privacy issues.

A healthy dose of skepticism about what can be shipped to a public cloud, or even kept in a hybrid environment, must be balanced with a better understanding of how on-premise options have evolved. High-performance storage close to the compute source is not only relevant, but also much less complex and costly than it once was, thanks in large part to the emergence of software-defined storage. For system engineers, all these factors are driving the need for a smarter, more strategic look at storage options.

Figure 1: Storage management and labor costs are typically the largest single factor of TCO of on-site storage solutions. The advent of software-defined storage reduces barriers to deployment by reducing these costs dramatically.


Storage at the Edge Competes on Cost and Complexity

High-capacity storage is often pushed to the cloud because of a cost advantage. However, today that may be more of a perception than a reality—the landscape is changing dramatically with the advent of software-defined storage or hyperconvergence. The software-defined architecture hides the complexities of managing conventional storage and therefore helps reduce costs. Users can deploy and manage such systems themselves, reducing the need for a well-qualified IT resource dedicated to full-time management of terabytes or petabytes of data. This removes a significant barrier to deployment, given that historically as much as 70 percent of TCO would have been related to staff and physical labor required to maintain conventional storage on-site.

Even more compelling is the way software-defined storage eliminates the focus on the underlying storage hardware itself. System engineers must only define parameters that optimize their application, such as storage requirements and response time. The software then allocates existing resources, determining application needs and where best to store data. This may include any combination of disk types, such as spinning drives, solid-state or flash, or even tape. Adding nodes or replacing SSDs has no impact on the upper level application; with redundancy built in, the system will automatically switch to other resources without disrupting performance.

This is a critical advancement—in the past, system engineers would have had to ensure system or application code was configured to rely on a particular type of storage. Without a strong understanding of storage technologies, systems could become unbalanced based on under- or over-provisioned hardware assets. Now, software-defined advancements allow optimized, scalable storage solutions; this enables engineers to consider how they want to manage storage, rather than which individual pieces of hardware they need.
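The parameter-driven allocation described above can be sketched as a simple policy function: the engineer states a requirement, and the software layer picks the cheapest tier that satisfies it. All tier names, latency figures and relative costs below are hypothetical, not taken from any particular product:

```python
# Hypothetical storage tiers: (name, typical access latency in ms,
# relative cost per TB). Figures are illustrative assumptions only.
TIERS = [
    ("nvme-flash", 0.1, 10.0),
    ("sata-ssd", 0.5, 4.0),
    ("spinning-disk", 8.0, 1.0),
    ("tape", 60_000.0, 0.1),
]

def place(max_latency_ms: float) -> str:
    """Return the cheapest tier that still meets the response-time requirement."""
    candidates = [(cost, name) for name, lat, cost in TIERS if lat <= max_latency_ms]
    if not candidates:
        raise ValueError("no tier meets the stated requirement")
    return min(candidates)[1]

print(place(1.0))    # -> sata-ssd (flash meets the requirement but costs more)
print(place(10.0))   # -> spinning-disk
```

A real software-defined layer also handles pooling, redundancy and live data movement, but the essential shift is the same: the engineer expresses requirements, not hardware choices.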

Defining Performance is Step One

To determine storage requirements and how they can be ideally managed, system engineers must of course understand the performance needs of their application. This sounds obvious, but is not necessarily an easy thing to accomplish. As storage system engineers try to create an optimized storage solution—a daunting task unto itself—primary performance factors need to be evaluated one by one. For example, storage capacity represents a benchmark, but it is really only one piece of the puzzle. How the solution performs in terms of latency, throughput, data integrity and reliability is just as critical as raw capacity.

Figure 2: Data storage is not one-size-fits-all, and is more than ever focused on optimizing application performance based on how data will be accessed, secured, upgraded and managed. Software-defined storage offers a range of flexible, high-performance options that reduce both complexity and cost.


Consider an application such as training and simulation, by definition intended to mirror a real-life training scenario through high-resolution video and graphics. Or an even more critical application such as real-time situational awareness, where decisions are made quickly based on accurate and timely information. Protecting against latency in these settings is a central concern, yet can be jeopardized without assessing the need for compute-storage proximity. Data that resides farther from its compute platform simply takes longer to arrive and feed the system; this model may not be reliable enough to ensure users get a fully responsive experience given the unpredictable nature of remote network connections. When the potential for delay can’t be tolerated by such applications, localized storage becomes the option of choice.

Considering Mission-Critical Data

The same type of assessment must be applied to a slate of other considerations, such as the need for control and ready access to stored data, or enhanced data privacy and security. Cloud hosting inherently creates a lack of control, and also relies on network availability and bandwidth which could be deemed a single point of failure. These may be unacceptable considerations if data is mission critical or impacts revenue. In other scenarios, for example applications such as healthcare and financial services, data environments are mandated to meet specific regulatory compliance requirements. Here a localized storage solution will more readily demonstrate the required data security and access control, helping engineers gain compliance ratings and customer confidence.

Because cloud computing is built on a virtualized platform, the actual place where data is stored or is in motion is also at times difficult to identify and trace. Despite cloud security advancements, security threats or data spills can be better managed with local data behind the firewall. Some data types are even prevented from crossing geographic borders, ruled by regulations meant to address data protection and ownership.

Insights at the Edge

The IoT’s growth means operational data is being generated at an exponential scale, offering a new kind of business value enabled through real-time analytics. Uncovering customer usage models or equipment maintenance requirements—these and other crucial business insights largely depend on the speed with which data can be collected, analyzed and acted upon. Yet even with compression, big data is everywhere, with applications such as genome sequencers or even high-definition cameras producing a few megabits of data every second. In these industrial and other mission-critical applications, it may not make sense to move every piece of data to the cloud. Transporting such large data sets requires bandwidth with high QoS, racking up unnecessary costs quickly.

Instead, a localized storage strategy efficiently supports computationally intensive operations performed at the edge. Only analytics results are shared to the cloud, rather than moving all the data under costly bandwidth requirements.
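A back-of-the-envelope calculation shows why shipping raw data off-site adds up. Assuming a source producing roughly 5 Mbit/s, in line with the figures above, and a hypothetical dedicated 100 Mbit/s uplink:

```python
# Illustrative arithmetic only: the data rate and uplink speed are assumptions.
MBIT = 1_000_000

rate_bps = 5 * MBIT                 # source producing ~5 Mbit/s of data
seconds_per_day = 24 * 3600
bits_per_day = rate_bps * seconds_per_day
terabytes_per_day = bits_per_day / 8 / 1e12

uplink_bps = 100 * MBIT             # assumed dedicated 100 Mbit/s uplink
transfer_hours = bits_per_day / uplink_bps / 3600

print(f"{terabytes_per_day:.2f} TB generated per day")   # -> 0.05 TB
print(f"{transfer_hours:.1f} hours of uplink per day")   # -> 1.2 hours
```

One such sensor already consumes over an hour of a dedicated 100 Mbit/s link per day; a fleet of them quickly saturates any affordable connection, which is exactly the case for analyzing locally and shipping only results.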

A Changing Perspective

With the emergence of software-defined storage, the on-site vs cloud perspective has shifted. System engineers can now focus on how they want to serve clients and meet service level agreement (SLA) requirements, rather than on the storage and underlying hardware itself. Offload application data to the cloud and trust that it is going to work? Or keep it close at hand, hide the complexity and retain greater control? The answer lies in careful evaluation of your specific application’s needs for accessibility, control, data ownership, security and more. Compelling, cost-effective advantages can be gained with storage close to the source—such as reducing maintenance staff, improving data reliability and security, and addressing physical challenges of latency and bandwidth in transmitting data. For meaningful, time-sensitive analytics, it is vital that both the application and high performance storage capacity are in close proximity.

Organizations are finding it difficult to store and manage growing volumes of data. It’s a rising challenge with great impact on embedded design, given that embedded systems are generally where the data is being generated. Yet moving everything to the cloud may not be the only answer. It will be many years before we come close to knowing if that is even possible. The more likely reality is that the need for high-performance, localized storage will increase in step.

Finding Value in Software-Defined Storage

In conventional storage, a person or an application needs to be aware of all the specific hardware components. In the simplest terms, software-defined storage is a layer of abstraction that hides the complexity of the underlying compute, storage, and in some cases networking technologies.

In the software-defined model, storage systems are virtualized, pooled, aggregated and delivered as a software service to users. An organization then has a storage pool, created from readily available COTS storage components, that offers a longer life cycle while lowering OpEx and TCO over time. Software-defined systems even enable the creation and sharing of storage pools from the storage that is directly attached to servers. The storage management software may add further value by hiding the underlying complexity of managing high-performance and scalable storage solutions. Some also provide open APIs, enabling integration with third-party management tools and custom-built applications.

Figure 3: Avoiding latency is essential, as users may not get a fully responsive experience from data that resides too far from its compute source.


Evolution continues, and within the software-defined storage market, there’s a general movement called “software-defined” everything. For example, there is software-defined networking or software-defined virtual function. Hyperconvergence is a follow-on trend, which essentially converges compute, storage, virtualization, networking and bandwidth onto a single platform and defines it in a software context. The software handles all underlying complexity, leaving only simple tasks for administrators to manage, and for clients to be served in a transparent and highly efficient manner.

Bilal Khan is Chief Technology Officer, Dedicated Computing. Khan spearheads technology innovation, product development and engineering, and deployment of the company’s systems and services strategy. Dedicated Computing supplies embedded hardware and software solutions optimized for high-performance applications, with expertise in data storage, embedded design, security tools and services, software stack optimization and consulting, and cloud business infrastructure design and management.

Virtual Prototyping and the IoT: A Q&A with Carbon Design Systems

Thursday, March 12th, 2015

A designer of high-performance verification and system integration products highlights the virtual prototyping/IoT relationship and explains the more prominent part pre-built virtual prototypes now play.

Editor’s note: Our thanks to Bill Neifert, chief technology officer of Carbon Design Systems and a Carbon co-founder, who recently offered his insights on a number of questions.

EECatalog: How much embedded design activity are you seeing for IoT?

Bill Neifert, Carbon Design Systems: IoT is an intriguing space, since exactly what it means varies from source to source. Regardless, the fundamental premise of IoT is taking a device and placing it on the Internet. While this seems simple enough at first, it introduces substantial design complexity along with a lot of additional potential. Any device connected to the Internet needs to be able to interact securely, which requires a good amount of design effort and proper design practices. In addition, since connected features are seen as a means for differentiation, it’s common for the connected capabilities to be leveraged to integrate other features such as remote controllability and notification. All of this complexity drives a need for additional embedded development.

EECatalog: How can IoT designers take advantage of virtual prototyping, especially when security is a consideration?

Neifert, Carbon Design Systems: Virtual prototypes add value to IoT development in multiple ways. Since IoT devices typically are more consumer-focused, time to market is often a key way to differentiate. Virtual prototypes are able to pull in design schedules by parallelizing the hardware and software design efforts. In addition, since security plays a big role in IoT, accurate virtual prototypes can help ensure that all of the system’s corner cases have been validated early in the design.

EECatalog: What trends are you seeing that will affect embedded designers in 2015?

Neifert, Carbon Design Systems: The primary trend that will continue to affect embedded designers in 2015 is the mass migration of designs to the ARM architecture. ARM has long been the dominant player in the mobile space. In the past few years though, it has achieved significant penetration into other market verticals. We’ve seen strong adoption of ARM processors in a number of areas, winning design starts away from both internal offerings as well as other processor IP vendors. We’re seeing this trend reflected in our own customer base. A year ago, the majority of our virtual prototypes were used in SoC designs focused on the mobile space. While mobile is still a widely used application, in just the past 12 months, we’ve seen new ARM-based design starts in servers, base-stations, storage, sensors and industrial applications. In many cases, this is the first time that the design team is using ARM IP. If you’re an embedded designer and your current project isn’t using an ARM processor, there’s a good chance that you’ll use one on your next project.

EECatalog: How has virtual prototyping changed the way in which embedded designers work?

Neifert, Carbon Design Systems: In the past few years, multicore designs have become far more prevalent. Although this puts a lot more power into the hands of the designer, it also introduces substantial complexity from both the hardware and software design perspective. Virtual prototypes empower designers with the tools to handle this complexity. Accurate virtual prototypes enable architects to ensure that the performance goals of the chips used to drive embedded designs are being met. Validation and verification engineers are leveraging virtual prototypes to ensure that the interactions between system software and the hardware being designed are correct. Finally, firmware and software engineers are able to leverage the early availability, speed and visibility of virtual prototypes to design software earlier and debug problems that would take much longer to isolate in real hardware. There’s no way for real silicon or even hardware prototypes to match the visibility and debuggability that come standard in a virtual prototype.

Recently, pre-built virtual prototypes such as Carbon Performance Analysis Kits (CPAKs) have been playing a much larger role in the embedded design process. These pre-validated systems come complete with all of the hardware and software models needed to be productive. Designers are typically up and running within minutes of download. The system serves as a great starting point to accurately model the performance of the embedded design long before it is built. Since pre-built virtual prototypes support both 100 percent accurate execution as well as 100 MIPS performance, they can be used by all of the design teams in an embedded design. They enable embedded designers to spend less time creating a virtual prototype and more time using that virtual prototype to be productive.

Anne Fisher is managing editor of EECatalog. Her experience has included opportunities to cover a wide range of embedded solutions in the PICMG ecosystem as well as other technologies. Anne enjoys bringing embedded designers and developers solutions to technology challenges as described by their peers, as well as insight and analysis from industry leaders.

Authentication Using SRAM Physically Unclonable Function In FPGA Optimizes M2M Network Security

Wednesday, March 11th, 2015

Together, the PUF and PKI establish a strong identity and association for every machine and their communication in the virtual private network, ensuring they are protected and can be used in M2M and IoT applications safely, securely, and with confidence.

The number of devices capable of machine-to-machine (M2M) communication is exploding across many types of networks, and nearly all associated traffic is vulnerable to malicious monitoring and modification. Communication must be secured against these attacks if the connected machines are to be used safely and with confidence. Among available security services, authentication is the most important for M2M communication security, and is most effective when a physically unclonable function (PUF) is used to compute the private key for identifying a device. Also critical is the use of a public key infrastructure (PKI) for distributing and ensuring associated public keys are authentic.

Importance of Asymmetric Authentication Using a PUF
The two security services to consider are confidentiality and authenticity. While confidentiality typically uses encryption to protect information from being learned by unauthorized viewers, authenticity goes further to ensure the message arrived intact from a known (trusted) source with no undetected errors. Although authenticity isn’t as well known as confidentiality, it is the superior cryptographic service for M2M communication. For example, in establishing a secure source of network time for synchronization, it is far less important to hide the payload (since the correct time is not a secret) than it is to make sure that the received time value is tamper-free and originated from a trusted source.

The best authentication approach uses asymmetric, rather than symmetric, cryptography to verify a message’s true source through the application of a secret key. In symmetric cryptography, the sender and receiver machines share a secret key, which doesn’t work well for larger networks—either the same key must be used by all the nodes, which presents an unacceptable security risk, or different keys must be used between each device pair, which is unwieldy because the number of keys grows quadratically with the number of nodes.
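The key-management difference is easy to quantify: pairwise symmetric keys number one per device pair, while asymmetric schemes need only one key pair per device.

```python
# Key-count scaling for the two approaches described above.
def symmetric_pairwise_keys(n: int) -> int:
    """One shared secret per device pair: n*(n-1)/2 keys in total."""
    return n * (n - 1) // 2

def asymmetric_keypairs(n: int) -> int:
    """One private/public key pair per device: grows linearly."""
    return n

for n in (10, 100, 1000):
    print(n, symmetric_pairwise_keys(n), asymmetric_keypairs(n))
# 10 nodes:   45 shared keys vs 10 key pairs
# 100 nodes:  4950 vs 100
# 1000 nodes: 499500 vs 1000
```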

In contrast, asymmetric or “public key” cryptography employs a private key that the sender uses to digitally sign outgoing messages and is known only to that node. Verifying the digital signature using the sender’s associated public key proves message authenticity. Only one device has to store the secret (private) key and any number of devices can use the public key, which can be transmitted with the message and needn’t be confidential or stored permanently by recipients. Only breached devices must be removed from the network and refitted with replacement key pairs, and the overall system scales linearly with the number of nodes—a big improvement over symmetric key systems. Hybrid schemes deliver the same overall effect using a private key to establish an ephemeral symmetric session key and symmetric MAC tag for authentication, and are usually more computationally efficient.

The best private key is computed from a PUF, which is based on a physical characteristic created unintentionally during a device’s manufacture. This characteristic is unique for each copy of the device due to small (often atomic-scale), uncontrollable (and therefore impossible to clone) and yet measurable random manufacturing variations. These measurements are analogous to a fingerprint or other ‘biometric’ and can be used to construct a private key specific to that particular device.

Advantage of SRAM PUFs
One of the best characterized and reliable types of memory PUF is an SRAM PUF. Created on a smartcard chip, FPGA or other IC, it works by measuring the random start-up state of the bits in a block of SRAM. Each SRAM bit comprises two nominally equal—but not completely identical—cross-coupled inverters. When power is first applied to the IC, each SRAM bit will start up in either the “one” or “zero” state based on a preference that is largely “baked in” during IC manufacturing (Figure 1).
In exceptionally well-balanced inverters, thermal noise may cause the bit to occasionally overcome the baked-in preference and start up in the opposite state, but the preference generally overcomes any dynamic noise. Plus, noise due to temperature, lifetime and other environmental factors is accounted for through error correction techniques, ensuring that noisy bits can be corrected and restored to values that were recorded at PUF enrollment. That way, the same key is reconstructed at each turn-on.

Figure 1. SRAM bits comprise two nominally identical cross-coupled inverters. Small random variations that are “baked in” during manufacturing cause each bit to have a preferred start-up state that is used to compute a secret key.

The SRAM PUF can be designed to guarantee perfect key reconstruction over all environments and its full lifetime with errors as low as one per billion. This infrequent failure is detectable with high probability, and all that is usually required is to try again to get the correct key. Additionally, protection of the SRAM PUF’s secret key is particularly strong because when power is off, the SRAM PUF’s secret effectively disappears from the device, and if the activation code (error correction data) is erased, the PUF secret key cannot be reconstructed no matter how thoroughly the device is subsequently analyzed.

Distributing Public Keys and Ensuring Their Authenticity
In addition to using asymmetric authentication with PUF-based private keys, a PKI should be used to distribute and ensure the associated public keys are authentic. In a PKI, a certificate authority (CA) certifies all the approved devices that belong to the network by digitally signing their public keys using the CA’s own private key.

If the message has been tampered with, its digital signature will not verify correctly. To ensure the public key is authentic, the recipient also has to check the CA’s digital signature on the certificate using the CA’s public key, which is generally pre-placed in every device by the manufacturer or network operator, and is inherently trusted. This creates a hierarchical certificate-based chain-of-trust within which the identity of every legitimate machine in the network can be known with very high assurance. Messages can be attributed to these machines with high confidence, and imposter machines and forged messages can easily be detected.
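The two-step verification can be sketched with textbook RSA over tiny primes. The raw, unpadded signatures and key sizes here are for illustration only; a real PKI uses X.509 certificates with padded RSA or ECDSA, and the device private key would come from the PUF rather than chosen primes:

```python
import hashlib

# Toy chain-of-trust: textbook RSA with tiny primes, illustration only.
def make_key(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))        # (public, private); Python 3.8+

def digest(data: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(priv, data: bytes) -> int:
    n, d = priv
    return pow(digest(data, n), d, n)

def verify(pub, data: bytes, sig: int) -> bool:
    n, e = pub
    return pow(sig, e, n) == digest(data, n)

ca_pub, ca_priv = make_key(61, 53, 17)       # root CA key, pre-placed in devices
dev_pub, dev_priv = make_key(101, 113, 3)    # per-device key (PUF-derived in practice)
cert = sign(ca_priv, repr(dev_pub).encode()) # CA certifies the device public key

def accept(message: bytes, sig: int) -> bool:
    return (verify(ca_pub, repr(dev_pub).encode(), cert)  # 1. check the certificate
            and verify(dev_pub, message, sig))            # 2. check the message

msg = b"network time: 1425081600"
assert accept(msg, sign(dev_priv, msg))               # authentic message accepted
assert not accept(b"tampered", sign(dev_priv, msg))   # altered payload rejected
```

A recipient thus needs to permanently store only the CA public key; device certificates and public keys can travel with the messages themselves.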

Implementing SRAM PUFs in FPGAs and SoC FPGAs
FPGAs and SoC FPGAs provide many benefits in M2M applications due to their inherent flexibility and high number of I/O pins. With today’s technology, the FPGA’s SRAM PUF is used to establish a pre-configured certified identity for each device. In the case of Microsemi’s SmartFusion2 SoC FPGAs and IGLOO2 FPGAs, Microsemi acts as the certificate authority.

To implement PUF technology in an FPGA or SoC FPGA, the devices must include built-in cryptographic capabilities such as hardware accelerators for AES, SHA, HMAC, and elliptic curve cryptography (ECC), plus a cryptographic-grade true random bit generator. These capabilities can be used to create a user PKI with the user’s own certificate authority blessing each legitimate machine in the network. Each machine has a chain-of-trust from the user’s well-protected root-CA keys all the way down to the high-assurance identity established at the atomic level by the FPGA’s PUF.

richard_newellRichard Newell is senior principal product architect, Microsemi Corp., SoC Group. He plays a key role in planning the security features for the current and future generations of flash-based FPGAs and SoC FPGAs. Richard has an electrical engineering background with experience in analog and digital signal processing, cryptography, control systems, inertial sensors and systems, and FPGAs. He is an alumnus of the University of Iowa. Richard is the recipient of approximately one dozen U.S. patents, and is a member of the Tau Beta Pi and Eta Kappa Nu honorary engineering societies.

