Posts Tagged ‘top-story’

Internet of Things (IoT) Security Market to Grow at a CAGR of 35.21% and Reach USD 41.85 Billion by 2022

Wednesday, October 25th, 2017

The Internet of Things (IoT) security market is driven by rising security concerns around critical infrastructure and by strict government regulations. It is expected to grow from USD 7.90 Billion in 2016 to USD 41.85 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 35.21%. The year 2016 serves as the base year for the study, with the market size forecast covering 2017 to 2022.

Manufacturers covered in the global Internet of Things (IoT) security market report, along with their geographic revenue mix, include Cisco Systems, Inc., IBM Corporation, Infineon Technologies, Intel Corporation, Symantec Corporation, Arm Holdings PLC, Check Point Software Technologies Ltd., Trend Micro, Inc., PTC, Inc., Gemalto NV, Sophos Group PLC, Inside Secure and Wurldtech Security Technologies Inc.

Request a sample copy of this report: http://bit.ly/2vXqeIN

The global Internet of Things (IoT) security market is segmented by type into:
• Network Security
• Endpoint Security
• Application Security
• Cloud Security

Iris Recognition for Secure Digital ID

Monday, February 6th, 2017

Here’s why the biology of our irises makes them reliable, secure digital ID tools.

After the introduction of fingerprint scanners in mobile phones, biometrics has become a core feature of our mobile devices. Remembering and recalling passwords is arguably one of the biggest pain points in our digital lives, and biometrics is perhaps the easiest way of addressing it: there is no need to remember something that consumers always have, such as fingers, eyes and faces. While biometric technology based on fingerprints was the first to find widespread use, more mobile phones are lately building on that success by using the iris in their latest models.

Fujitsu/NTT DOCOMO launched the world’s first mobile phone with iris recognition capability, the F-04G, in mid-2015, followed by the F-02H in the winter of 2015. Other mobile devices with this feature include the Lumia 950 and 950XL, the HP Elite X3 (Figure 1), Fujitsu’s F-02 tablet, and the ill-fated Samsung Note 7. The interest in the iris stems primarily from better reliability than even fingerprints offer, as well as higher security. Both advantages derive from the iris’s biological characteristics.

Intricate and Yours for Life

The iris is the doughnut-like structure around the pupil of the eye. A muscle, it adjusts the size of the pupil to control the amount of light entering the eye. Like our bodies’ other muscle structures, the iris has a rich and unique pattern. This unique pattern, as with fingerprint patterns, is what computer algorithms use to derive a unique identity for each iris and associate it with the identity of the individual.

The iris pattern is even more complex (richer) than any fingerprint pattern, so it has more information content, which translates into more entropy and a higher level of security. Think of how six-digit passcode security compares to four-digit passcode security. The iris gets its color from a pigment called melanin, whose color differs from person to person. The iris forms during gestation, before birth, and remains the same for life. As an internal organ, the iris is completely covered by a transparent layer called the cornea, which makes it a more stable and reliable biometric modality.
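
To put that passcode comparison in numbers (a back-of-the-envelope illustration, not a figure from the article): a four-digit passcode allows 10^4 combinations, or log2(10^4) ≈ 13.3 bits of entropy, while a six-digit passcode allows 10^6 combinations, or log2(10^6) ≈ 19.9 bits. A rich iris template encodes far more distinguishing information than either.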

Figure 1: The HP Elite x3 has an integrated iris scanner. (Source: commons.wikimedia.org, Maurizio Pesce, Milan, Italy)

Overcoming Challenges

The Unique Identification Authority of India (UIDAI) has created the world’s largest biometric-based citizen and resident authentication system based on the iris. During its initial pilot studies the UIDAI verified that:

  • The iris does not wear out with age or with use.
  • Weather changes do not affect iris authentication.
  • Iris image capture does not require physical contact; capturing the iris image is physically similar to the familiar practice of taking photographs.
  • Iris capture requires only simple instructions, such as, “Look at the camera; keep your eyes wide open.”
  • A fake iris is difficult to synthesize, making impersonation harder.
  • The iris image cannot be captured without the individual’s cooperation.
  • The spread of low-cost consumer cameras has helped drive down iris camera costs and eased manufacturing.

While these pilots and subsequent tests done by UIDAI have confirmed the advantages of the iris, early adopters of iris-enabled mobile phones have also reported some problems:

  • Difficult to use under direct sunlight
  • Difficulty detecting the iris when using certain kinds of glasses
  • Difficulty detecting the iris while the user is moving

Despite these problems, iris biometric technology is favored over fingerprint biometric technology, which can prove unreliable depending on an individual’s age, occupation and other external conditions. Many young people have soft skin with wrinkles that affect scanning, while older people tend to have dry, brittle skin that does not make the contact scanning requires. People involved in manual labor, such as construction workers and farmers, end up damaging their fingerprints. Additionally, fingerprints are easily left behind on devices and other objects we touch, which can make it easier for sophisticated adversaries to steal them.

One company taking on some of the challenges associated with iris biometric technology is Delta ID. The company’s ActiveIRIS® technology includes advanced algorithms that compensate for these challenges and provide users with an easy-to-use, secure iris recognition system that works for mobile users across age groups, occupations and usage conditions. The Delta ID ActiveIRIS software compensates for the motion blur introduced when the user is moving, for occlusion of the eye by the eyelashes under direct sunlight or by reflections on glasses, and for many more usage scenarios.

Research and Markets predicts the global iris recognition in access control market (authentication, biometrics, cards, touch screens) to grow at a CAGR of 18.09% during the period 2016-2020[1].

Figure 2: “Unlike fingerprints, the iris-enabled identification can be touchless and seamless, adding to the in-cabin experience.”

The higher security and reliability of the iris has significant appeal across applications and services spanning multiple vertical markets. On mobile devices, one of the primary uses of biometrics has been mobile payments and banking. The success of mobile-enabled financial applications hinges on the usability and security of the biometric modality used for authentication. Performing better than fingerprints on both fronts, iris biometric technology is expected to see increasing adoption in the near future. In the automotive sector, we’re seeing interest in iris biometric technology for driver identification and driver monitoring. Unlike fingerprint-based identification, iris-enabled identification can be touchless and seamless, adding to the in-cabin experience (Figure 2). Driver identification can then be used for multiple use cases—in-cabin customization, security, pay-as-you-go insurance plans, auto-enabled payments—and at gas stations, parking lots, drive-through restaurants, and more.

The applications of this technology can be endless once consumers recognize the superior user experience and security.


Dr. Salil Prabhakar is President and CEO of Delta ID Inc., a California technology company he co-founded in 2011.

He is an expert in biometric fingerprint and iris scanning technology. Dr. Prabhakar has co-authored 50+ publications (14,000+ Google citations), two editions of the award-winning Handbook of Fingerprint Recognition, five book chapters, and eight edited proceedings. He has several patents granted and pending. He has served as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, SPIE Journal of Electronic Imaging, EURASIP Journal on Image and Video Processing, Elsevier Pattern Recognition, and Current Bioinformatics. He was lead guest co-editor of the April 2007 IEEE Transactions on Pattern Analysis and Machine Intelligence special issue on biometrics. He has been a co-chair/program chair for 10+ IEEE, IAPR and SPIE conferences, and was general co-chair of the 5th International Conference on Biometrics in 2012 in New Delhi. He was VP Finance of the IEEE Biometrics Council during 2010-2012.

References:
1. http://www.researchandmarkets.com/publication/mtausix/3920634

Improving Security with Bluetooth Low Energy 4.2

Monday, September 12th, 2016

With version 4.2, Bluetooth Low Energy (BLE) offers new features that enhance privacy and security, addressing vulnerabilities of earlier BLE versions while also improving energy efficiency.

Protecting a user’s private information is important for every wireless device, from fitness monitors to payment systems. Privacy mechanisms prevent devices from being tracked by untrusted devices. Secure communications keep data safe while also preventing unauthorized devices from injecting data to trigger unintended operation of the system.

Privacy
To maintain the privacy of BLE devices, trusted BLE devices use a shared secret called the Identity Resolving Key (IRK) to generate and resolve a random address known as the Resolvable Private Address (RPA). Only a device that has the advertising device’s IRK can resolve the address and thus track the movement of the advertising BLE device.

The IRK is shared between devices at the time of pairing and is stored in the internal memory of the devices during bonding in a list called the Resolving List. Thus, devices that have bonded earlier can resolve a peer device’s private address.
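
As a concrete illustration of this mechanism, here is a minimal Python sketch of RPA generation and resolution. It assumes the third-party cryptography package, and the byte ordering and bit-layout details are simplified relative to the Bluetooth Core Specification:

    # Sketch only: BLE's ah() hash is AES-128 over the 24-bit prand padded
    # to 16 bytes, keeping the 3 least-significant bytes of the result.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ah(irk: bytes, prand: bytes) -> bytes:
        padded = bytes(13) + prand                # zero padding || 24-bit prand
        enc = Cipher(algorithms.AES(irk), modes.ECB()).encryptor()
        return enc.update(padded)[-3:]            # 24-bit address hash

    def make_rpa(irk: bytes) -> bytes:
        prand = bytearray(os.urandom(3))
        prand[0] = (prand[0] & 0x3F) | 0x40       # mark the address as resolvable
        prand = bytes(prand)
        return prand + ah(irk, prand)             # RPA = prand || hash

    def resolve_rpa(irk: bytes, rpa: bytes) -> bool:
        prand, addr_hash = rpa[:3], rpa[3:]
        return ah(irk, prand) == addr_hash        # only IRK holders can match

    irk = os.urandom(16)                          # shared during pairing in real BLE
    assert resolve_rpa(irk, make_rpa(irk))        # a bonded peer resolves the RPA

An untrusted observer therefore sees only an address that changes at every RPA timeout, while a bonded peer holding the IRK can recognize the device across address changes.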

In Bluetooth 4.1, the Resolving List is maintained in the Host. Address resolution is also done by the Host. This requires Host intervention every time an advertisement packet with an RPA is received. In Bluetooth 4.2, the Resolving List is maintained in the Controller. Since the Controller resolves the private address, the Host does not need to wake up in devices where the Host is implemented using a separate CPU. This lowers overall power consumption. Even in the devices where both the Controller and the Host are implemented using the same CPU, power consumption shrinks as the address does not need to go through the various protocol layers, thus reducing the number of CPU cycles needed to resolve the address.

The RPA can also be changed over time, making it more difficult to track a private device. In the case of Privacy 1.1 (Bluetooth 4.1), the recommended RPA timeout is 15 minutes. However, such privacy had limited usability because of the impact on connection time and power consumption. In addition, features like device filtering and Directed Connectable Advertisement (DCA) cannot be used in Privacy 1.1 while using RPA, as the address cannot be resolved at the Link Layer.

Privacy 1.2 in Bluetooth 4.2 allows for an RPA timeout period from 1 second up to 11.5 hours. As BLE 4.2 supports address resolution at the Link Layer, DCA can be used to speed connection between devices and consume less energy doing so.

Passive Eavesdropping
To protect communications from unauthorized access, wireless systems must prevent passive eavesdropping and man-in-the-middle (MITM) attacks. In passive eavesdropping, a third device quietly listens to the private communication between two devices (see Figure 1). Protection against passive eavesdropping is important in applications like payment solutions where confidentiality of information like passwords is of utmost importance.

Figure 1: In a passive eavesdropping attack, a third device listens in on the communications between two devices.

Systems can protect against passive eavesdropping by using a key to encrypt data. LE Secure Connections, introduced in Bluetooth Low Energy 4.2, uses the Federal Information Processing Standard (FIPS)-compliant Elliptic Curve Diffie-Hellman (ECDH) algorithm for key generation (the Diffie-Hellman Key, or DHKey). The DHKey is used to generate other keys, such as the Long Term Key (LTK), but is itself never shared over the air. Because the DHKey is never exchanged over the air, it is very difficult for a third device to guess the encryption key. Earlier Bluetooth Low Energy devices (Bluetooth 4.1 or older) used easy-to-guess Temporary Keys (TK) to encrypt the link for the first time; the Long Term Key (LTK), along with other keys, was then exchanged between devices over this encrypted but potentially compromised link.

Man-in-the-Middle
MITM is a scenario where, as two devices try to communicate with each other, a third device inserts itself between them and emulates both devices to the other (see Figure 2). Authentication protects against MITM by ensuring that the device a system is communicating with is actually the intended device and not an unauthorized device emulating the intended one.

Figure 2: During a Man-in-the-Middle attack, a third device inserts itself into a connection and emulates both devices as if they are directly connected.

In Bluetooth, an association model is the mechanism two devices use to authenticate each other and then securely exchange data, while pairing is the process of key exchange. Before keys are exchanged, both devices must share pairing parameters that include authentication requirements. If authentication is required, both devices must authenticate each other using one of the association models.

Which model to use is based on three parameters:

  1. Is MITM protection required?
  2. Can the device receive data from a user (such as via a button or keyboard) or output data to the user (such as via an LCD capable of displaying a six-digit decimal number)? Involving the user in the pairing process is an important element in the secure transfer of data.
  3. Can the device communicate Out-of-Band (OOB)? For example, if part of the security key can be transferred between the two devices over Near-Field Communication (NFC), an eavesdropper will not be able to make sense of the final data.

Four association models are available in BLE 4.2:

  1. Numeric Comparison—Both devices display a six-digit number, and the user authenticates by selecting ‘Yes’ if both devices display the same number (a sketch of how that number can be derived follows this list). This association model was introduced with LE Secure Connections in Bluetooth 4.2. With legacy pairing (Bluetooth Low Energy 4.1 or older), these IO capabilities would have led to a Just Works association model (unauthenticated).
  2. Passkey Entry—The user either inputs an identical Passkey into both devices, or one device displays the Passkey and the user enters it into the other device. Exchanging the Passkey one bit at a time in Bluetooth 4.2 is an important enhancement over the legacy Passkey Entry model (Bluetooth 4.1 or older), where the whole Passkey is exchanged in a single confirm operation. The bit-by-bit disclosure ensures that no more than two bits of the unguessed Passkey can leak before the protocol fails the pairing procedure.
  3. Out of Band (OOB)—The OOB association model is the model to use if at least one device with OOB capability has already exchanged cryptographic information out of band. Here, protection against MITM depends on the MITM resistance of the OOB protocol used for sharing the information. In BLE 4.1 or older (legacy pairing), both devices needed OOB capabilities for the OOB association model to be used.
  4. Just Works—This association model is used either when MITM protection is not needed or when devices have only the IO capabilities indicated in Table 1.
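
As referenced in item 1 above, here is a hedged Python sketch of how a six-digit comparison value can be derived. It is modeled on the specification’s g2 function (an AES-CMAC keyed with one nonce, computed over both public keys and the other nonce, reduced modulo 10^6); the exact inputs and byte order are simplified here:

    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    def comparison_value(pk_a: bytes, pk_b: bytes,
                         nonce_a: bytes, nonce_b: bytes) -> int:
        mac = CMAC(algorithms.AES(nonce_a))   # one 16-byte nonce acts as the CMAC key
        mac.update(pk_a + pk_b + nonce_b)     # bind both public keys and the other nonce
        val = int.from_bytes(mac.finalize()[-4:], "big")
        return val % 1_000_000                # six decimal digits to display

Because fresh nonces enter the computation, an MITM that substituted its own public keys would, with high probability, cause the two displays to disagree and the user to reject the pairing.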

Table 1 shows the association model that can be used based on the IO capabilities when LE Secure Connections is used for pairing. However, when MITM protection is not required, or when OOB data is available to either of the BLE devices, IO capabilities can be ignored.

Table 1: The appropriate association model to use depends upon the I/O capabilities of the two devices.

Bluetooth Low Energy 4.2 has three association models that protect against MITM and one for applications that don’t need MITM protection. The Numeric Comparison association model is not available in BLE version 4.1 and older, which leaves only the Passkey Entry association model for authenticated pairing if OOB data is not available. Passkey Entry requires a keypad for entering the passkey, which may not be possible in many systems, limiting the use of this MITM protection capability. Numeric Comparison, by contrast, can be used when only display capability and a yes/no input are available, extending MITM protection to more applications.
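
The selection logic described above can be summarized in a short Python sketch. This is a simplification: the real mapping in Table 1 considers the IO capabilities of both devices, while this collapses them into a few flags:

    def choose_association_model(mitm_required: bool, oob_available: bool,
                                 both_display_yes_no: bool,
                                 keypad_or_display: bool) -> str:
        if oob_available:
            return "Out of Band"              # the OOB channel carries the trust
        if not mitm_required:
            return "Just Works"               # unauthenticated pairing suffices
        if both_display_yes_no:
            return "Numeric Comparison"       # new with LE Secure Connections
        if keypad_or_display:
            return "Passkey Entry"
        return "Just Works"                   # no usable IO: MITM unprotected

    print(choose_association_model(True, False, True, False))  # Numeric Comparison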

Pairing

Pairing is the process of key exchange and authentication. There are two types of pairing, depending on the Bluetooth Low Energy version: LE Secure Connections (introduced in Bluetooth 4.2) and LE Legacy Pairing (supported from Bluetooth 4.0 onwards). LE Secure Connections significantly improves security over previous versions.

Pairing in Bluetooth Low Energy is divided into three phases. During the first phase, devices exchange their pairing parameters. Pairing parameters are the capabilities and security requirements that are used to determine the association model to be used. The pairing parameters consist of various fields, as shown in Figure 3.

Figure 3: Pairing parameters exchanged during phase 1 of pairing in BLE 4.2.

LE Secure Connections uses a Federal Information Processing Standard (FIPS)-compliant Elliptic Curve Diffie-Hellman (ECDH) algorithm that allows two devices to establish a shared key over an unsecured channel. The form of ECDH used is P-256, which means the private key generated by each device is 256 bits (32 bytes) long.

Prior to executing the ECDH algorithm, both devices must settle upon a set of domain parameters. In the case of LE Secure Connections, both devices know the parameters by default, as they follow the FIPS-compliant P-256 ECDH mechanism. After this, each device generates a pair of keys. The first is called the private key, which the device never shares or sends over the air. The second is called the public key; it is generated from the private key and a generator function that is part of the domain parameters.

Each device then sends its public key to the other device. Using the public key received from the other device, its own public key, and its own private key, each device can generate the same shared key. Note that a passive eavesdropper can only sniff the public keys exchanged between the devices; without one of the private keys, it cannot generate the shared key used for further encryption. In this way, ECDH can establish a shared key over an insecure channel and encrypt the link.
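
The exchange described above can be sketched in a few lines of Python using the third-party cryptography package. Note that real LE Secure Connections derives the LTK with the specification’s f5 function; the HKDF call below merely stands in for “derive further keys from the shared secret”:

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Each device generates a P-256 key pair; private keys never leave the device.
    priv_a = ec.generate_private_key(ec.SECP256R1())
    priv_b = ec.generate_private_key(ec.SECP256R1())

    # Only the public keys cross the air; a sniffer seeing them cannot
    # compute the shared secret without one of the private keys.
    dhkey_a = priv_a.exchange(ec.ECDH(), priv_b.public_key())
    dhkey_b = priv_b.exchange(ec.ECDH(), priv_a.public_key())
    assert dhkey_a == dhkey_b                 # both sides now hold the same DHKey

    # Stand-in derivation of a 128-bit LTK from the shared secret.
    ltk = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
               info=b"illustrative LTK").derive(dhkey_a)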

Figure 4 shows how two devices can establish a shared secret when a third device is listening to the communication between them.

Figure 4: Establishment of a shared secret when a third device is listening.

In phase 2, the ECDH key pair is generated and the public keys are shared to authenticate the devices and establish link encryption. To ensure that the device on the other end is the intended device, authentication is performed using one of the association models. The devices generate a Long Term Key (LTK) from the shared key of the ECDH process and proceed to the second stage of the authentication check, which involves checking the DHKey.

In phase 3, the LTK is used to encrypt the link. Once the link is encrypted, keys are shared as indicated by Initiator Key Distribution/Responder Key Distribution flags in the pairing parameters (e.g., the IRK that is needed if the RPA is used).

Data Signing
Data signing is another feature available in BLE that adds a further level of security. BLE can use a Connection Signature Resolving Key (CSRK) to authenticate data when encryption is not used. To do this, a signature is generated using the signing algorithm and a counter; the counter is incremented with each Data PDU to defeat replay attacks. Note that data signing does not protect against passive eavesdropping. Rather, it verifies to the receiving device the authenticity of the device from which the data originated.
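
A minimal Python sketch of this sign-and-count pattern follows. BLE actually computes an AES-CMAC over the Data PDU and a 32-bit SignCounter using the CSRK; the truncation length and byte order here are illustrative:

    from cryptography.hazmat.primitives.cmac import CMAC
    from cryptography.hazmat.primitives.ciphers import algorithms

    def sign_pdu(csrk: bytes, pdu: bytes, counter: int) -> bytes:
        mac = CMAC(algorithms.AES(csrk))
        mac.update(pdu + counter.to_bytes(4, "little"))
        return mac.finalize()[:8]             # truncated signature (MIC)

    def verify_pdu(csrk: bytes, pdu: bytes, counter: int,
                   sig: bytes, last_counter: int) -> bool:
        if counter <= last_counter:           # stale counter: replay attempt
            return False
        return sign_pdu(csrk, pdu, counter) == sig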

Bluetooth Low Energy 4.2 offers strong security mechanisms for building secure wireless systems. Although BLE 4.1 and 4.2 both offer features to guard against MITM, a truly secure BLE system can be implemented only with Bluetooth 4.2: when LE Legacy Pairing is used (Bluetooth 4.1), only the OOB association model protects against passive eavesdropping. Bluetooth 4.2 also adds the Numeric Comparison association model and uses an Elliptic Curve Diffie-Hellman algorithm to ensure privacy and data security.

For more details on Bluetooth 4.2 privacy and security features, see application note AN99209 or the Bluetooth Core Specification.


Sachin Gupta is a Staff Engineer, Product Marketing, with Cypress Semiconductor. He loves working on different types of analog and digital circuits, as well as synthesizable code. He holds a diploma in Electronics and Communications from Vaish Technical Institute and a Bachelor’s in Electronics and Communications from Guru Gobind Singh Indraprastha University, Delhi. He has eight years of experience in SoC applications. He can be reached at sgup@cypress.com.

Richa Dham is Sr. Marketing and Applications Manager with Cypress Semiconductor. Her interests lie in defining new solutions, especially in the connectivity and IoT area. She completed her Master’s in Technology in Communications Engineering from the Indian Institute of Technology, Delhi (IITD).

Compute-Storage Proximity: When Storage at the Edge Makes More Sense

Monday, June 27th, 2016

Why it’s time to take a new look at high-performance storage at the edge.

When is localized storage more viable than a cloud solution? It’s a considerable debate for many organizations, and it requires a thoughtful, case-by-case examination of how stored data is used, secured and managed. Cloud migration may or may not be ideal, depending on considerations such as data accessibility, control and ownership. Data-intensive applications close to the source have great impact on how and why storage choices are made—considerations include mission-critical data sensitive to latency and bandwidth, as well as compliance and privacy issues.

A healthy dose of skepticism about what can be shipped to a public cloud, or even kept in a hybrid environment, must be balanced with a better understanding of how on-premise options have evolved. High-performance storage close to the compute source is not only relevant, but also much less complex and costly than it once was, thanks in large part to the emergence of software-defined storage. For system engineers, all these factors are driving the need for a smarter, more strategic look at storage options.

Figure 1: Storage management and labor costs are typically the largest single factor in the TCO of on-site storage solutions. The advent of software-defined storage reduces barriers to deployment by reducing these costs dramatically.

Storage at the Edge Competes on Cost and Complexity

High-capacity storage is often pushed to the cloud because of a perceived cost advantage. However, that may today be more perception than reality—the landscape is changing dramatically with the advent of software-defined storage and hyperconvergence. A software-defined architecture hides the complexities of managing conventional storage and therefore helps reduce costs. Users can deploy and manage such systems themselves, reducing the need for a well-qualified IT resource dedicated to full-time management of terabytes or petabytes of data. This removes a significant barrier to deployment, given that historically as much as 70 percent of TCO has been related to the staff and physical labor required to maintain conventional storage on-site.

Even more compelling is the way software-defined storage eliminates the focus on the underlying storage hardware itself. System engineers must only define parameters that optimize their application, such as storage requirements and response time. The software then allocates existing resources, determining application needs and where best to store data. This may include any combination of disk types, such as spinning drives, solid-state or flash, or even tape. Adding nodes or replacing SSDs has no impact on the upper level application; with redundancy built in, the system will automatically switch to other resources without disrupting performance.

This is a critical advancement—in the past, system engineers would have had to ensure system or application code was configured to rely on a particular type of storage. Without a strong understanding of storage technologies, systems could become unbalanced based on under- or over-provisioned hardware assets. Now, software-defined advancements allow optimized, scalable storage solutions; this enables engineers to consider how they want to manage storage, rather than which individual pieces of hardware they need.

Defining Performance is Step One

To determine storage requirements and how they can best be managed, system engineers must of course understand the performance needs of their application. This sounds obvious, but it is not necessarily easy to accomplish. As system engineers try to create an optimized storage solution—a daunting task unto itself—primary performance factors need to be evaluated one by one. For example, storage capacity is a benchmark, but it is really only one piece of the puzzle. How the solution performs in terms of latency, throughput, data integrity and reliability is equally as critical as raw capacity.

Figure 2: Data storage is not one-size-fits-all, and is more than ever focused on optimizing application performance based on how data will be accessed, secured, upgraded and managed. Software-defined storage offers a range of flexible, high-performance options that reduce both complexity and cost.

Consider an application such as training and simulation, by definition intended to mirror a real-life training scenario through high-resolution video and graphics. Or an even more critical application such as real-time situational awareness, where decisions are made quickly based on accurate and timely information. Protecting against latency in these settings is a central concern, yet can be jeopardized without assessing the need for compute-storage proximity. Data that resides farther from its compute platform simply takes longer to arrive and feed the system; this model may not be reliable enough to ensure users get a fully responsive experience given the unpredictable nature of remote network connections. When the potential for delay can’t be tolerated by such applications, localized storage becomes the option of choice.

Considering Mission-Critical Data

The same type of assessment must be applied to a slate of other considerations, such as the need for control and ready access to stored data, or enhanced data privacy and security. Cloud hosting inherently creates a lack of control, and also relies on network availability and bandwidth which could be deemed a single point of failure. These may be unacceptable considerations if data is mission critical or impacts revenue. In other scenarios, for example applications such as healthcare and financial services, data environments are mandated to meet specific regulatory compliance requirements. Here a localized storage solution will more readily demonstrate the required data security and access control, helping engineers gain compliance ratings and customer confidence.

Because cloud computing is built on a virtualized platform, the actual place where data is stored, or through which it moves, is also at times difficult to identify and trace. Despite cloud security advancements, security threats and data spills can be better managed with local data behind the firewall. Some data types are even prevented from crossing geographic borders, ruled by regulations meant to address data protection and ownership.

Insights at the Edge

With the IoT’s growth, operational data is being generated at exponential scale, offering a new kind of business value enabled by real-time analytics. Uncovering customer usage models or equipment maintenance requirements—these and other crucial business insights largely depend on the speed with which data can be collected, analyzed and acted upon. Yet even with compression, big data is everywhere, with applications such as genome sequencers and even high-definition cameras producing megabits of data every second. In these industrial and other mission-critical applications, it may not make sense to move every piece of data to the cloud. Transporting such large data sets requires bandwidth with high QoS, racking up unnecessary costs quickly.

Instead, a localized storage strategy efficiently supports computationally intensive operations performed at the edge. Only analytics results are shared to the cloud, rather than moving all the data under costly bandwidth requirements.
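
The pattern is simple to picture in code. In this minimal Python sketch (the payload shape and the upload call are illustrative, not from the article), raw samples stay on edge storage and only a small summary travels to the cloud:

    import json, statistics

    def summarize(samples):                    # runs on the edge node
        return {"count": len(samples),
                "mean": statistics.fmean(samples),
                "max": max(samples)}

    raw = [21.4, 21.9, 22.3, 22.1]             # sensor readings kept locally
    payload = json.dumps(summarize(raw))       # a few bytes, not gigabytes
    # upload(payload)                          # hypothetical cloud uplink call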

A Changing Perspective

With the emergence of software-defined storage, the on-site vs cloud perspective has shifted. System engineers can now focus on how they want to serve clients and meet service level agreement (SLA) requirements, rather than on the storage and underlying hardware itself. Offload application data to the cloud and trust that it is going to work? Or keep it close at hand, hide the complexity and retain greater control? The answer lies in careful evaluation of your specific application’s needs for accessibility, control, data ownership, security and more. Compelling, cost-effective advantages can be gained with storage close to the source—such as reducing maintenance staff, improving data reliability and security, and addressing physical challenges of latency and bandwidth in transmitting data. For meaningful, time-sensitive analytics, it is vital that both the application and high performance storage capacity are in close proximity.

Organizations are finding it difficult to store and manage growing volumes of data. It’s a rising challenge with great impact on embedded design, given that embedded systems are generally where the data is being generated. Yet moving everything to the cloud may not be the only answer. It will be many years before we come close to knowing if that is even possible. The more likely reality is that the need for high-performance, localized storage will increase in step.

Finding Value in Software-Defined Storage

In conventional storage, a person or an application needs to be aware of all the specific hardware components. In the simplest terms, software-defined storage is a layer of abstraction that hides the complexity of the underlying compute, storage and, in some cases, networking technologies.

In the software-defined model, storage systems are virtualized, pooled, aggregated and delivered as a software service to users. An organization then has a storage pool created from readily available COTS storage components, which lowers OpEx and TCO over the solution’s life cycle. Software-defined systems even enable the creation and sharing of storage pools from storage directly attached to servers. The storage management software may add further value by hiding the underlying complexity of managing high-performance, scalable storage solutions. Some also provide open APIs, enabling integration with third-party management tools and custom-built applications.

Figure 3: Avoiding latency is essential, as users may not get a fully responsive experience from data that resides too far from its compute source.

Evolution continues, and around the software-defined storage market there’s a general movement toward “software-defined everything.” For example, there is software-defined networking, and there are software-defined virtual functions. Hyperconvergence is a follow-on trend, which essentially converges compute, storage, virtualization, networking and bandwidth onto a single platform and defines it in a software context. The software handles all underlying complexity, leaving only simple tasks for administrators to manage, and allowing clients to be served in a transparent and highly efficient manner.


Bilal Khan [bilal.khan@dedicatedcomputing.com] is Chief Technology Officer, Dedicated Computing. Khan spearheads technology innovation, product development and engineering, and deployment of the company’s systems and services strategy. Dedicated Computing supplies embedded hardware and software solutions optimized for high-performance applications, with expertise in data storage, embedded design, security tools and services, software stack optimization and consulting, and cloud business infrastructure design and management.

Virtual Prototyping and the IoT: A Q&A with Carbon Design Systems

Thursday, March 12th, 2015

A designer of high-performance verification and system integration products highlights the virtual prototyping/IoT relationship and explains the more prominent part pre-built virtual prototypes now play.

Editor’s note: Our thanks to Bill Neifert, chief technology officer and co-founder of Carbon Design Systems, who recently offered his insights on a number of questions.

EECatalog: How much embedded design activity are you seeing for IoT?

Bill Neifert, Carbon Design Systems: IoT is an intriguing space, since exactly what it means varies from source to source. Regardless, the fundamental premise of IoT is taking a device and placing it on the Internet. While this seems simple enough at first, it introduces substantial design complexity along with a lot of additional potential. Any device connected to the Internet needs to be able to interact securely, which requires a good amount of design effort and proper design practices. In addition, since connected features are seen as a means for differentiation, it’s common for the connected capabilities to be leveraged to integrate other features such as remote controllability and notification. All of this complexity drives a need for additional embedded development.

EECatalog: How can IoT designers take advantage of virtual prototyping, especially when security is a consideration?

Neifert, Carbon Design Systems: Virtual prototypes add value to IoT development in multiple ways. Since IoT devices typically are more consumer-focused, time to market is often a key way to differentiate. Virtual prototypes are able to pull in design schedules by parallelizing the hardware and software design efforts. In addition, since security plays a big role in IoT, accurate virtual prototypes can help ensure that all of the system’s corner cases have been validated early in the design.

EECatalog: What trends are you seeing that will affect embedded designers in 2015?

Neifert, Carbon Design Systems: The primary trend that will continue to affect embedded designers in 2015 is the mass migration of designs to the ARM architecture. ARM has long been the dominant player in the mobile space. In the past few years though, it has achieved significant penetration into other market verticals. We’ve seen strong adoption of ARM processors in a number of areas, winning design starts away from both internal offerings as well as other processor IP vendors. We’re seeing this trend reflected in our own customer base. A year ago, the majority of our virtual prototypes were used in SoC designs focused on the mobile space. While mobile is still a widely used application, in just the past 12 months, we’ve seen new ARM-based design starts in servers, base-stations, storage, sensors and industrial applications. In many cases, this is the first time that the design team is using ARM IP. If you’re an embedded designer and your current project isn’t using an ARM processor, there’s a good chance that you’ll use one on your next project.

EECatalog: How has virtual prototyping changed the way in which embedded designers work?

Neifert, Carbon Design Systems: In the past few years, multicore designs have become far more prevalent. Although this puts a lot more power into the hands of the designer, it also introduces substantial complexity from both the hardware and software design perspective. Virtual prototypes empower designers with the tools to handle this complexity. Accurate virtual prototypes enable architects to ensure that the performance goals of the chips used to drive embedded designs are being met. Validation and verification engineers are leveraging virtual prototypes to ensure that the interactions between system software and the hardware being designed are correct. Finally, firmware and software engineers are able to leverage the early availability, speed and visibility of virtual prototypes to design software earlier and debug problems that would take much longer to isolate in real hardware. There’s no way for real silicon or even hardware prototypes to match the visibility and debuggability that are standard in a virtual prototype.

Recently, pre-built virtual prototypes such as Carbon Performance Analysis Kits (CPAKs) have been playing a much larger role in the embedded design process. These pre-validated systems come complete with all of the hardware and software models needed to be productive; designers are typically up and running within minutes of download. The system serves as a great starting point for accurately modeling the performance of the embedded design long before it is built. Since pre-built virtual prototypes support both 100 percent accurate execution and 100 MIPS performance, they can be used by all of the design teams in an embedded design. They enable embedded designers to spend less time creating a virtual prototype and more time using that virtual prototype to be productive.


Anne Fisher is managing editor of EECatalog.com. Her experience has included opportunities to cover a wide range of embedded solutions in the PICMG ecosystem as well as other technologies. Anne enjoys bringing embedded designers and developers solutions to technology challenges as described by their peers as well as insight and analysis from industry leaders. She can be reached at afisher@extensionmedia.com.

Authentication Using SRAM Physically Unclonable Function In FPGA Optimizes M2M Network Security

Wednesday, March 11th, 2015

Together, the PUF and PKI establish a strong identity for every machine and secure their communication in the virtual private network, ensuring the machines are protected and can be used in M2M and IoT applications safely, securely, and with confidence.

The number of devices capable of machine-to-machine (M2M) communication is exploding across many types of networks, and nearly all associated traffic is vulnerable to malicious monitoring and modification. Communication must be secured against these attacks if the connected machines are to be used safely and with confidence. Among available security services, authentication is the most important for M2M communication security, and is most effective when a physically unclonable function (PUF) is used to compute the private key for identifying a device. Also critical is the use of a public key infrastructure (PKI) for distributing and ensuring associated public keys are authentic.

Importance of Asymmetric Authentication Using a PUF
The two security services to consider are confidentiality and authenticity. While confidentiality typically uses encryption to protect information from being learned by unauthorized viewers, authenticity goes further to ensure the message arrived intact from a known (trusted) source with no undetected errors. Although authenticity isn’t as well known as confidentiality, it is the superior cryptographic service for M2M communication. For example, in establishing a secure source of network time for synchronization, it is far less important to hide the payload (since the correct time is not a secret) than it is to make sure that the received time value is tamper-free and originated from a trusted source.

The best authentication approach uses asymmetric, rather than symmetric, cryptography to verify a message’s true source through the application of a secret key. In symmetric cryptography, the sender and receiver machines share a secret key, which doesn’t work well for larger networks—either the same key must be used by all the nodes, which presents an unacceptable security risk, or different keys must be used between each device pair, which is unwieldy because the number of keys grows quadratically with the number of nodes.

In contrast, asymmetric or “public key” cryptography employs a private key that the sender uses to digitally sign outgoing messages and is known only to that node. Verifying the digital signature using the sender’s associated public key proves message authenticity. Only one device has to store the secret (private) key and any number of devices can use the public key, which can be transmitted with the message and needn’t be confidential or stored permanently by recipients. Only breached devices must be removed from the network and refitted with replacement key pairs, and the overall system scales linearly with the number of nodes—a big improvement over symmetric key systems. Hybrid schemes deliver the same overall effect using a private key to establish an ephemeral symmetric session key and symmetric MAC tag for authentication, and are usually more computationally efficient.
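
A minimal Python sketch of that asymmetric pattern follows (it uses the third-party cryptography package; the message content and curve choice are illustrative):

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    device_key = ec.generate_private_key(ec.SECP256R1())  # never leaves the node
    message = b"network-time=2015-03-11T12:00:00Z"        # payload needn't be secret

    signature = device_key.sign(message, ec.ECDSA(hashes.SHA256()))

    # Any recipient holding the public key can verify origin and integrity.
    try:
        device_key.public_key().verify(signature, message,
                                       ec.ECDSA(hashes.SHA256()))
        print("authentic")
    except InvalidSignature:
        print("rejected")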

The best private key is computed from a PUF, which is based on a physical characteristic created, effectively unintentionally, during a device’s manufacture. This characteristic is unique for each copy of the device due to small (often atomic-scale), uncontrollable (and therefore impossible-to-clone) yet measurable random manufacturing variations. These measurements are analogous to a fingerprint or other ‘biometric’ and can be used to construct a private key specific to that particular device.

Advantage of SRAM PUFs
One of the best-characterized and most reliable types of memory PUF is the SRAM PUF. Created on a smartcard chip, FPGA or other IC, it works by measuring the random start-up state of the bits in a block of SRAM. Each SRAM bit comprises two nominally equal—but not completely identical—cross-coupled inverters. When power is first applied to the IC, each SRAM bit will start up in either the “one” or “zero” state based on a preference that is largely “baked in” during IC manufacturing (Figure 1).

In exceptionally well-balanced inverters, thermal noise may cause a bit to occasionally overcome its baked-in preference and start up in the opposite state, but the preference generally overcomes any dynamic noise. Noise due to temperature, lifetime and other environmental factors is accounted for through error-correction techniques, ensuring that noisy bits can be corrected and restored to the values recorded at PUF enrollment. That way, the same key is reconstructed at each turn-on.

Figure 1. Each SRAM bit comprises two nominally identical cross-coupled inverters. Small random variations that are “baked in” during manufacturing give each bit a preferred start-up state that is used to compute a secret key.

The SRAM PUF can be designed to guarantee perfect key reconstruction over all environments and its full lifetime, with error rates as low as one per billion. This infrequent failure is detectable with high probability, and all that is usually required is to try again to get the correct key. Additionally, protection of the SRAM PUF’s secret key is particularly strong: when power is off, the SRAM PUF’s secret effectively disappears from the device, and if the activation code (error-correction data) is erased, the PUF secret key cannot be reconstructed no matter how thoroughly the device is subsequently analyzed.
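
A toy Python simulation captures the behavior described above: each bit has a manufacturing-fixed start-up preference plus occasional noise flips, and repeated reads with majority voting stand in for the real error-correction (fuzzy-extractor) machinery. All parameters are illustrative:

    import random

    N_BITS, NOISE = 64, 0.03                   # illustrative size and flip rate
    preference = [random.getrandbits(1) for _ in range(N_BITS)]  # "baked in"

    def power_up():
        # Each bit usually follows its preference; noise occasionally flips it.
        return [b ^ (random.random() < NOISE) for b in preference]

    def read_stable(votes=7):
        reads = [power_up() for _ in range(votes)]
        return [int(sum(col) > votes // 2) for col in zip(*reads)]

    print(read_stable() == preference)         # True with very high probability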

Distributing Public Keys and Ensuring Their Authenticity
In addition to using asymmetric authentication with PUF-based private keys, a PKI should be used to distribute and ensure the associated public keys are authentic. In a PKI, a certificate authority (CA) certifies all the approved devices that belong to the network by digitally signing their public keys using the CA’s own private key.

If the message has been tampered with, its digital signature will not verify correctly. To ensure the public key is authentic, the recipient also checks the CA’s digital signature on the certificate using the CA’s public key, which is generally pre-placed in every device by the manufacturer or network operator and is inherently trusted. This creates a hierarchical certificate-based chain of trust within which the identity of every legitimate machine in the network can be known with very high assurance. Messages can be attributed to these machines with high confidence, and imposter machines and forged messages can easily be detected.
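
In code, the chain-of-trust check looks like this hedged Python sketch (real PKIs use X.509 certificates rather than a bare signed key; the names here are illustrative):

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    ca_key = ec.generate_private_key(ec.SECP256R1())       # held only by the CA
    device_key = ec.generate_private_key(ec.SECP256R1())   # e.g., PUF-derived

    # At enrollment, the CA signs the device public key: a bare-bones "certificate".
    device_pub = device_key.public_key().public_bytes(
        Encoding.X962, PublicFormat.UncompressedPoint)
    cert_sig = ca_key.sign(device_pub, ec.ECDSA(hashes.SHA256()))

    # Any machine with the pre-placed CA public key can authenticate the device key.
    ca_key.public_key().verify(cert_sig, device_pub, ec.ECDSA(hashes.SHA256()))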

Implementing SRAM PUFs in FPGAs and SoC FPGAs
FPGAs and SoC FPGAs provide many benefits in M2M applications due to their inherent flexibility and high number of I/O pins. With today’s technology, the FPGA’s SRAM PUF is used to establish a pre-configured certified identity for each device. In the case of Microsemi’s SmartFusion2 SoC FPGAs and IGLOO2 FPGAs, Microsemi acts as the certificate authority.

To implement PUF technology in an FPGA or SoC FPGA, the devices must include built-in cryptographic capabilities such as hardware accelerators for AES, SHA, HMAC, and elliptic curve cryptography (ECC), plus a cryptographic-grade true random bit generator. These capabilities can be used to create a user PKI, with the user’s own certificate authority blessing each legitimate machine in the network. Each machine has a chain of trust from the user’s well-protected root-CA keys all the way down to the high-assurance identity established at the atomic level by the FPGA’s PUF.


Richard Newell is senior principal product architect, Microsemi Corp., SoC Group. He plays a key role in planning the security features for the current and future generations of flash-based FPGAs and SoC FPGAs. Richard has an electrical engineering background with experience in analog and digital signal processing, cryptography, control systems, inertial sensors and systems, and FPGAs. He is an alumnus of the University of Iowa. Richard is the recipient of approximately one dozen U.S. patents, and is a member of the Tau Beta Pi and Eta Kappa Nu honorary engineering societies.

Security and the IoT: A Q&A with Rambus

Wednesday, March 11th, 2015

Thoughts on IoT semiconductor growth and how addressing security concerns is paramount.

Editor’s note: Our thanks to Steve Woo, vice president, Enterprise Solutions Technology and distinguished inventor at Rambus, who recently offered his insights on a number of questions.

EECatalog: Steve, you have what you call a cautious take on IoT semiconductor market growth, can you elaborate?

Steve Woo, Rambus: One of the things that was really encouraging at CES was the tremendous progress in terms of IoT devices and interoperability. A couple of years ago there were a lot of people looking at interesting ideas and technology, but what you began to realize is that in most of those cases the technology itself wasn’t the challenge (and don’t get me wrong, there are individual challenges). You did get the feeling, “maybe that isn’t the hardest part of all of this—maybe the hardest thing that has come to light over the last couple of years is that the challenges are going to be in interoperability.”

We need standards and methods that people have agreed upon to have these devices interoperate. And we also have to work on some other technical things like power and security. What was interesting at CES this year is all of those topics got a lot of attention—some more than others—it is the kind of thing you would expect to happen as an industry starts to try to work together and go from the crawling stage to the walking stage.

A recent Gartner report projected growth in the semiconductor industry, and I think [growth of that caliber] is definitely possible if you examine industries that are looking for ways to connect a lot of devices together.

It is definitely the case that the automobile industry is looking at a lot more technology in cars. And it’s also the case that there is a lot more discussion about home automation and wearable devices. So it is definitely possible for us to see the growth [the Gartner report] predicted, but I think there are some issues that need to get addressed.

One of the things that came up a few times at CES this year is that the technology for making the devices is very doable in many cases, but the real issue is: do you have a reasonable business model that allows you to stay in business with these devices? That is a tougher challenge, and the reason why you tend to see a number of companies coming in and out of the market so quickly for wearables and IoT devices. The market’s not mature, and the business models are unproven at this point, [although] there are undoubtedly some that will work very well.

[However] if you’ve got established markets and you’re trying to overlay or inject IoT-type devices into the markets, it seems like an easier path than it is to generate a whole new business model where you are not going to try to tap into an existing revenue stream.

Another challenge is power. People are saying, “gosh, I don’t want to have to take off my device every day to recharge it.” For other types of devices, like beacons, the goal is to provide enough power so that the device can stay in the field for years and be powered by something like a watch battery. Two areas where there are challenges in meeting power limitations are communication and on-chip storage. Moving data on and off a device takes power, and the power can be prohibitive if lots of data is moving to and from the device over long distances. Storing data on-chip can also be a challenge, as persistent storage is needed that doesn’t require a lot of power to store and retrieve the data.

Fortunately, the industry has been busy addressing these and other challenges in recent years. Bluetooth LE addresses communication challenges by optimizing power for short-range communication with other devices or a hub that can provide longer-distance communication like a cell phone or a Nest thermostat. Interoperability still needs to be worked out, but the technology is definitely getting better for addressing power concerns related to communication. Because of the sheer volume of data that can be captured and generated by IoT devices, some infrastructures may opt to minimize communication to save power by performing some localized pre-processing on the IoT devices and/or the communication hubs that sit between the IoT devices and data centers.

EECatalog: What do architects and chip designers need to do in order to improve chip and system security?

Woo, Rambus: When you start having things like automated homes and automated cars, what you are also doing is offering more ports and entryways into the system, and every one of those has to be secured. If not done correctly, that just means there are more opportunities to get into chips and systems.

People have come to the realization that security is a goal that needs to be treated as a ‘first-class design parameter,’ which means it needs to be thought about during the definition phase for an architecture and a product.

EECatalog: What is the advantage of building security into silicon?

Woo, Rambus: There are definitely levels of security that you can provide, and in part it’s about the types of tradeoffs one is willing to accept. Some systems choose to do everything in software because it’s relatively easy to deploy and to layer on top of existing systems. The problem is that software-only protection can be hacked, and we’ve seen numerous cases of that in the past year alone. This goes back to the point about treating security as a first-class design parameter—legacy systems often weren’t designed with software security in mind, so the system doesn’t enable software to do the best job possible for securing the system.

Our view is that having a hardware root of trust integrated into the silicon enables the highest levels of security. The key issue is that from the moment power is applied to the system or device, the first thing that comes up is the hardware, and at this point the chip or system can be attacked. Solutions exist that use hardware as a basis for security, but having that hardware integrated into the silicon increases the security of the silicon and system.

Having features built into that hardware, we believe, is the best way to secure the hardware. Again, there are tradeoffs to be made, and in some cases people will be willing to live with lower levels of security provided by software-only solutions. But as you begin to interconnect more and more devices, some are inevitably going to want higher levels of security, so providing hardware security and a hardware root of trust is going to be very important going forward.

You might have a device where you say, “I really need it to be secure for the next five years.” One of the nice things is that if you think about adding hardware elements or something like that, and they are in silicon, Moore’s Law has done a great job of naturally reducing the cost of any kind of hardware element that you put into a system. One way to secure the interaction of two communicating devices is to provide hardware elements in both of those devices that each device can use to authenticate the other. Moore’s Law enables these devices to include this functionality at reduced cost over time, and/or to provide more complex authentication mechanisms over time.

EECatalog: What led to the development of your hardware root of trust offering, CryptoManager?

Woo, Rambus: We are excited about the CryptoManager platform. On the surface it provides some very interesting capabilities, but when you dig a little deeper and look at how devices are used, you begin to realize that the elements contained within CryptoManager offer a very powerful toolkit that allows you to do things beyond what you might think about initially.

One of the things that CryptoManager has is a hardware root of trust that provides a secure foundation for connected communication. This core allows you to very securely enable and disable features and functionality in the chip that core sits in, and secures the chip throughout the lifecycle from manufacturing through deployment and end of life. The secure core acts like a vault door, where unless you know the combination and can open the vault door, you cannot gain access to anything inside the vault.

CryptoManager is a platform that allows us, using that core, to secure a semiconductor device throughout the device lifecycle, and to enable and disable features by managing keys that can lock and unlock functionality. As the silicon travels from facility to facility—for example, from fab to wafer cutting to die packaging to testing of packaged die to integration into a device like a phone—CryptoManager ensures that the semiconductor device carries manufacturer-specific keys that no one else can get to, and that manufacturer-specific capabilities are enabled or disabled.

A great example is managing access to the JTAG port of a chip. During device test, you need access to the JTAG port. The problem with JTAG and other debug ports is that it is almost like having the master keys to the house. [Via debug ports] you can get deep access to many areas of a chip, and once the device is in the field you may not want people to get access to some or all of these areas.

What CryptoManager can do is enable/disable access to things like debug ports. For example, you can turn on the debug port only when the device is being debugged, and once it leaves the factory you can turn off that port so no one else can get through there.

What that also means is that once the device leaves, say, the phone manufacturing facility and gets deployed into the field, you can actually enable and disable features in the silicon itself, so that you can now think about new kinds of business models where carriers can enable and disable features on the phone. Or you could enable or disable certain kinds of content to be played on that phone so you get this interesting way of looking at new revenue models and usage models—and it all relies on the same CryptoManager platform and toolkit that manages keys to enable and disable functionality.


Anne Fisher is managing editor of EECatalog.com. Her experience has included opportunities to cover a wide range of embedded solutions in the PICMG ecosystem as well as other technologies. Anne enjoys bringing embedded designers and developers solutions to technology challenges as described by their peers as well as insight and analysis from industry leaders. She can be reached at afisher@extensionmedia.com.