New HSA Spec Legitimizes AMD’s CPU+GPU Approach

Nearly three years after the formation of the Heterogeneous System Architecture (HSA) Foundation, the consortium has released version 1.0 of the Architecture Spec, Programmer’s Reference Manual, Runtime Specification, and a Conformance Plan.

Note: This blog is sponsored by AMD.



UPDATE 3/17/15: Added Imagination Technologies as one of the HSA founders.

No one doubts the wisdom of AMD’s Accelerated Processing Unit (APU) approach, which combines an x86 CPU with a Radeon graphics GPU. After all, one SoC does it all—makes CPU decisions and drives multiple screens, right?

True. Both AMD’s G-Series and the AMD R-Series do all that, and more. But that misses the point.

In laptops, that is how one uses the APU. But in embedded applications—like the IoT of the future, which increasingly relies on high performance embedded computing (HPEC) at the network’s edge—the GPU functions as a coprocessor. CPU + GPGPU (general-purpose graphics processing unit) is a powerful combination of decision-making plus parallel/algorithm processing that handles data locally, at the node, reducing the burden on the cloud. This, according to AMD, is how the IoT will reach tens of billions of units so quickly.

Trouble is, HPEC programming is difficult. Coding the GPU requires a “ninja programmer,” quipped AMD’s VP of Embedded Solutions Scott Aylor during his keynote at this year’s Embedded World Conference in Germany. (Video of the keynote is here.) Worse still, capitalizing on the CPU + GPGPU combination requires passing data between two architectures that don’t share a unified memory architecture. (It’s not that AMD’s APU couldn’t be designed that way; rather, the processors require different memory architectures for maximum performance. In short: they’re different for a reason.)

AMD’s Scott Aylor giving keynote speech at Embedded World, 2015. His message: some IoT nodes demand high-performance heterogeneous computing at the edge.


AMD realized this limitation years ago, and in 2012 it catalyzed the HSA Foundation along with several companies including ARM, Texas Instruments, Imagination Technologies, MediaTek, Qualcomm, Samsung and others. The goal was to create a set of specifications that not only define heterogeneous hardware architectures but also establish an HPEC programming paradigm for CPU, GPU, DSP and other compute elements. Collectively, the goal was to make designing, programming, and power-optimizing heterogeneous SoCs easy (Figure).

Heterogeneous systems architecture (HSA) specifications version 1.0 by the HSA Foundation, March 2015.

The HSA Foundation’s goals are realized by making the coder’s job easier with tools—such as an HSA version of the LLVM open source compiler—that integrate multiple cores’ ISAs. (Courtesy: HSA Foundation; all rights reserved.)

After three years of work, the HSA Foundation has just released its specifications at version 1.0:

  • HSA System Architecture Spec: defines hardware and OS requirements, the memory model (important!), the signaling paradigm, and fault handling.
  • Programmer’s Reference Manual: essentially a virtual ISA for parallel computing; defines an output format for HSA language compilers.
  • HSA Runtime Spec: an application library for running HSA applications; defines initialization, user queues, and memory management.

With HSA, the magic really does happen under the hood, where the devil’s in the details. For example, the HSA version of the LLVM open source compiler creates a vendor-agnostic HSA intermediate language (HSAIL) that’s essentially a low-level VM. From there, “finalizers” compile into vendor-specific ISAs such as AMD’s or Qualcomm Snapdragon’s. It’s at this point that low-level libraries can be added for specific silicon implementations (such as VSIPL for vector math). This programming model uses vendor-specific tools but allows novice programmers to start in C++ and end up with optimized, performance-oriented, power-efficient code for a heterogeneous combination of CPU plus GPU or DSP.
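The two-stage flow described above can be sketched in a few lines of plain Python. This is a toy model, not real HSA tooling: the real front end is LLVM emitting HSAIL/BRIG, and the finalizer is a vendor binary. All function names and the string-based "IR" here are hypothetical, purely to illustrate the split between the portable compile and the vendor-specific finalize step.

```python
# Toy model of HSA's split compilation (illustrative only; names are hypothetical).

PORTABLE_IR = "HSAIL"

def frontend_compile(source: str) -> list:
    """Stage 1: compile C++-like source once, to a vendor-agnostic pseudo-IR."""
    # A real compiler builds an AST; here each statement becomes one IR op.
    return [f"{PORTABLE_IR}:{stmt.strip()}" for stmt in source.split(";") if stmt.strip()]

def finalize(ir: list, vendor_isa: str) -> list:
    """Stage 2: a vendor-specific 'finalizer' lowers the same IR to a native ISA."""
    return [op.replace(f"{PORTABLE_IR}:", f"{vendor_isa}:") for op in ir]

ir = frontend_compile("load a; add a b; store c")
gcn = finalize(ir, "AMD-GCN")      # same IR, lowered for an AMD target
adreno = finalize(ir, "Adreno")    # same IR, lowered for a Qualcomm target
```

The point of the sketch: the expensive, portable work happens once, and only the last lowering step is vendor-specific.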

There are currently 43 companies involved with HSA, 16 universities, and three working groups (and they’re already working on version 1.1). Look at the participants, think of their market positions, and you’ll see they have a vested interest in making this a success.

In AMD’s case, as the only supplier of x86- and ARM-plus-GPU APUs to the embedded market, the company sees even bigger successes ahead as more embedded applications leverage heterogeneous parallel processing.

One example where HSA could be leveraged, said Phil Rogers, President of the HSA Foundation, is for multi-party video chatting. An HSA-compliant heterogeneous architecture would allow the processors to work in a single (virtual) memory pool and avoid the multiple data set copies—and processor churn—prevalent in current programming models.
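The copy-avoidance Rogers describes can be illustrated with a small sketch. This is plain Python, not the HSA API, and the stage names are hypothetical video-chat steps; the point is only to contrast a hand-off that duplicates the data set with two processors touching one shared buffer.

```python
# Illustrative sketch (not the HSA API): copy-based sharing vs. one shared buffer.

copies_made = 0

def copy_to_other_memory(frame):
    """Copy-based model: every CPU<->GPU hand-off duplicates the frame."""
    global copies_made
    copies_made += 1
    return dict(frame)

def cpu_decode(frame):        # hypothetical stage: decode a chat stream
    frame["decoded"] = True
    return frame

def gpu_composite(frame):     # hypothetical stage: composite the parties
    frame["composited"] = True
    return frame

# Without shared memory: two hand-offs, two full copies of the data set.
frame = {"pixels": [0] * 4}
gpu_frame = copy_to_other_memory(cpu_decode(frame))
result = copy_to_other_memory(gpu_composite(gpu_frame))

# With a single (virtual) memory pool: both processors mutate one buffer.
shared = {"pixels": [0] * 4}
cpu_decode(shared)
gpu_composite(shared)         # zero copies, no "processor churn"
```

In the shared-memory case the frame is never duplicated, which is exactly the churn an HSA-compliant architecture is meant to eliminate.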

With key industry players supporting HSA including AMD, ARM, Imagination Technologies, Samsung, Qualcomm, MediaTek and others, a lot of x86, ARM, and MIPS-based SoCs are likely to be compliant with the specification. That should kick off a bunch of interesting software development leading to a new wave of high performance applications.

Coke Machine with a Wii Mote? K-Cup Coffee Machine Marries Oculus Rift?

Talk about a weird Coke-machine-meets-Minority Report mash-up: the day is here when vending machines will do virtual reality tricks, access your Facebook page, and be as much fun as Grand Theft Auto. And they’ll still dispense stuff.


By Jeff Moriarty (Custom Coke machine – operating screen) [CC BY 2.0], via Wikimedia Commons

According to embedded tech experts ADLINK Technology and Intel, the intelligent vending machine is here. Multiple iPad-like screens will entertain you, suggest which product to buy, feed you social media, and take your money.

These vending machines will be IoT-enabled, connected to the cloud and multiple online databases, and equipped with multiple cameras. Onboard signal processing will respond to 3D gesture control or immerse you in a virtual reality scenario with the product you’re trying to buy. Facial recognition, voice recognition, and even following your eyes as you roam the screens will be the norm.

Facial recognition via Intel's Perceptual Computing SDK. (Courtesy: Intel Corporation.)


For you, the customer, the vending machine experience will be fun, entertaining—and very much like a video game. Retailers, meanwhile, hope to make more money off of you while using remote IoT monitoring, predictive diagnostics, and “big data” to lower their costs.

The era of the intelligent vending machine is upon us. The machine’s already connected to the Internet…and one of many billions of smart IoT nodes coming to a store near you.

Read more about them here.

Virtual, Immersive, Interactive: Performance Graphics and Processing for IoT Displays


Current-gen machines like these will give way to smart, IoT connected machines with 64-bit graphics and virtual reality-like customer interaction.

Not every IoT node contains a low-performance processor, a sensor and a slow comms link. Sure, there may be tens of billions of those, but estimates by IHS, Gartner, and Cisco still imply the need for billions of smart IoT nodes with hefty processing needs. These intelligent IoT platforms are best left to 64-bit algorithm processors like AMD’s G- and R-Series of Accelerated Processing Units (APUs). AMD’s claim to fame is 64-bit cores combined with on-board Radeon graphics processing units (GPUs) and tons of I/O.

As an example, consider this year’s smart vending machine. It may dispense espresso or electronic toys, or maybe show the customer wearing virtual custom-fit clothing. Suppose the machine showed you—at that very moment—using or drinking the product you were staring at just seconds before.

Far-fetched? Far from it. It’s real.

These machines require a multi-media, sensor fusion experience. Multiple iPad-like touch screens may present high-def product options while cameras track customers’ eye movements, facial expressions, and body language in three-space.

This “visual compute” platform will tailor the displayed information to best interact with the customer in an immersive, gesture-driven experience. Fusing all these inputs, processing the data in real time, and driving multiple displays is best handled by 64-bit APUs with closely coupled CPU and GPU execution units, hardware acceleration, and support for standards like DirectX 11, HSA 1.0, OpenGL and OpenCL.

For heavy lifting in visual compute-intensive IoT platforms, keep an eye on AMD’s graphics-ready APUs.

If you are attending Embedded World February 24-26, be sure to check out the keynote “Heterogeneous Computing for an Internet of Things World,” by Scott Aylor, Corporate VP and General Manager, AMD Embedded Solutions, on Wednesday the 25th at 9:30.

This blog was sponsored by AMD.

What’s the Nucleus of Mentor’s Push into Industrial Automation?

Mentor’s once nearly-orphaned Nucleus RT forms the foundation of a darned impressive software suite for controlling meat packing or nuclear power plants.

Everyone appreciates an underdog—the pale, wimpy kid with glasses and brown polyester sweater who gets routinely beaten up by the popular boys but sticks it out day after day and eventually grows up to create a tech start-up everyone loves. (Part of this story is my personal history; I’ll let you guess which part.)

So it is with Mentor’s Nucleus RTOS, which the company announced forms the basis for its recent initiative into Industrial Automation (I.A.). Announced this week at the ARC Industry Forum in Orlando is Mentor’s “Embedded Solution for Industrial Automation” (Figure 1). A cynic might look at this figure as a collection of existing Mentor products…slightly rearranged to make a compelling argument for a “solution” in the I.A. space. That skinny kid Nucleus is right there, listed on the diagram. Oh, how many times have I asked Mentor why they keep Nucleus around, only to get beaten up by the big RTOS kids!

Figure 1: Mentor’s Industrial Automation Solution for embedded, IoT-enabled systems relies on the Nucleus RTOS, including a secure hypervisor and enhanced security infrastructure.


After all, you’ll recognize Mentor’s Embedded Linux, the Nucleus RTOS I just mentioned, and the company’s Sourcery debug/analyzer/IDE product suite. All of these have been around for a while, although Nucleus is the grown-up kid in this bunch. (Pop quiz: True or false—did all three of these products come from Mentor acquisitions? Bonus question: from which company or companies?)

Into this mix, Mentor is adding new security tools from our friends at Icon Labs, plus hooks to a hot new automation GUI/HMI called Qt. (Full disclosure: Icon Labs founder Alan Grau is one of our security bloggers; however, we were taken by surprise at this recent Mentor announcement!)

Industry 4.0: I.A. meets IoT

According to Mentor’s Director of Product Management for Runtime Solutions, Warren Kurisu (whose last name is pronounced just like my first name in Japanese: Ku-ri-su), I.A. is gaining traction, big time. There’s a term for it: “Industry 4.0”. The large industrial automation vendors—like GE, Siemens, Schneider Electric, and others—have long been collecting factory data and feeding it into the enterprise, seeking to reduce costs, increase efficiency, and tie systems into the supply chain. Today we call this concept the Internet of Things (IoT), and Industry 4.0 is essentially the promise of interoperability between currently bespoke (and proprietary) I.A. systems and smart, connected IoT devices, with a layer of cyber security thrown in.

Mentor’s Kurisu points out that what’s changed is not only the kinds of devices that will connect into I.A. systems, but how they’ll connect in more ways than via serial SCADA or FieldBus links. Industrial automation will soon include all the IoT pipes we’re reading about: Wi-Fi, Bluetooth LE, various mesh topologies, Ethernet, cellular—basically whatever works and is secure.

The Skinny Kid Prevails

Herein lies the secret of Mentor’s Industrial Automation Solution. It just so happens the company has most of what you’d need to connect legacy I.A. systems to the IoT, plus add new kinds of smart embedded sensors into the mix. What’s driving the whole market is cost. According to a recent ARC survey, reduced downtime, improved process performance, reduced  machine lifecycle costs—all of these, and more, are leading I.A. customers and vendors to upgrade their factories and systems.

Additionally, says Mentor’s Kurisu, having the ability to consolidate multiple pieces of equipment, reduce power, improve safety, and add more local, operator-friendly graphics are criteria for investing in new equipment, sensors, and systems.

Mentor brings something to the party in each of these areas:

  • Machine or system convergence, through improved system performance or reduced footprint
  • Capabilities and differentiation, allowing I.A. vendors to create systems different from “the other guys”
  • Faster time-to-money, achieved through increased productivity in system design and debug, or anything else that reduces the I.A. vendor’s and its customers’ effort


Figure 2: Industrial automation a la Mentor. The embedded pieces rely on the Nucleus RTOS, or variations thereof. New Qt software for automation GUIs plus security gateways from Icon Labs bring security and IoT into legacy I.A. installations.

Figure 2 sums up the Mentor value proposition, but notice how most of the non-enterprise blocks in the diagram are built upon the Nucleus RTOS.

Nucleus, for example, has achieved safety certification by TÜV SÜD, complete with artifacts (a version called Nucleus SafetyCert). Mentor’s Embedded Hypervisor—a foundational component of some versions of Nucleus—can create a securely partitioned environment for multicore or multiple processors (heterogeneous or homogeneous) in which to run multiple operating systems that won’t cross-contaminate in the event of a virus or other fault.

New to the Mentor offering is an industry-standard Qt GUI running on Linux, or Qt optimized for embedded instantiations running on—wait for it—the Nucleus RTOS. Memory and other performance optimizations reduce the footprint and speed boot time, and there are now versions for popular IoT processors such as ARM’s Cortex-M cores.

Playground Victory: The Take-away

So if the next step in Industrial Automation is Industry 4.0—the rapid build-out of industrial systems that reduce cost and add IoT capabilities with secure interoperability—then Mentor has a pretty compelling offering. The consolidation and emphasis on low power I mentioned above can be had for free via capabilities already built into Nucleus.

For example, embedded systems based on Nucleus can intelligently turn off I/O and displays, and even rapidly drive multicore processors into their deepest sleep modes. One example explained to me by Mentor’s Kurisu showed an ARM-based big.LITTLE system that ramped performance when needed but otherwise kept power to a minimum. This is made possible, in part, by Mentor’s power-aware drivers, with the entire embedded I.A. system under the control of Nucleus.
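The big.LITTLE behavior described above can be sketched abstractly. This is a hypothetical model, not Mentor’s API: the class, method, and state names are invented to show the policy—ramp the big cores under load, shift to efficient cores when demand is light, and shut off I/O and displays when idle.

```python
# Hypothetical sketch of power-aware system control (not Mentor's API).

class PowerAwareSystem:
    def __init__(self):
        self.state = {"display": "on", "io": "on", "cores": "big"}

    def on_demand(self, demand: float) -> dict:
        """Pick a power state from the current workload demand (0.0-1.0)."""
        if demand > 0.5:
            self.state = {"display": "on", "io": "on", "cores": "big"}
        elif demand > 0.0:
            self.state = {"display": "on", "io": "on", "cores": "LITTLE"}
        else:  # idle: power-aware drivers shut off I/O and displays too
            self.state = {"display": "off", "io": "off", "cores": "sleep"}
        return self.state

node = PowerAwareSystem()
busy = node.on_demand(0.9)["cores"]
light = node.on_demand(0.2)["cores"]
idle = node.on_demand(0.0)
```

The real value in a product like Nucleus is that the drivers themselves are power-aware, so this policy can reach every peripheral, not just the CPU cores.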

And in the happy ending we all hope for, it looks like the maybe-forgotten Nucleus RTOS—so often ignored by editors like me writing glowingly about Wind River’s VxWorks or Green Hills’ INTEGRITY—well, maybe Nucleus has grown up. It’s the RTOS ready to run the factory of the future. Perhaps your electricity is right now generated under the control of the nerdy little RTOS that made it big.

Eye diagrams don’t lie: ReDrivers, before and after

USB, HDMI, or analog signals degrade to mush over long signal traces or cables; check out these “before and after” videos for proof.

Note: this particular blog posting is sponsored by Pericom Semiconductor.

I’ve been writing about Pericom Semiconductor’s signal integrity products for the last few months, and it’s been shocking to see digital signals acting like analog audio waveforms and bouncing around like techno music on a DJ’s VU meter.

Noise, attenuation, cross-talk and other high frequency nasties take clean rise/fall edges at the signal source and turn them all sloppy and jittery after only a few interconnects…or long cable runs.

Here’s what a good high frequency SERDES signal should look like (right).

Before and after

What better way to showcase the benefits of signal integrity ReDrivers and adaptive equalization (AEQ) than with some scope pix?

Seeing is believing: check out the following eye diagrams.

#1 USB 3.0 – 5 Gbps never looked so good

Using a standard PC motherboard and a 36-inch-trace test board, the degraded (closed-“eye”) USB 3.0 signal is shown below. Note that a USB 3.0 receiver at the destination end would not recover data—the system just wouldn’t work. The cause: PCB vias, traces, connectors, and cabling ruin the 5 Gbps digital signal.

Figure 1: USB 3.0 closed eye.

This USB 3.0 system wouldn’t work due to noise, jitter and out-of-spec signals.

By adding a tiny (2 mm x 2 mm) USB 3.0 ReDriver amplifier, the figure below shows a restored signal at the receiver and 5 Gbps data flow. Note the open “eye,” the margin above and below the signal continuum, and the tight timing on the overlapped traces. This is a very clean signal, and the system works within spec. (The complete PC motherboard set-up is in this USB 3.0 video.)
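The before/after effect can be captured in a toy numeric model. Real channels and ReDrivers are analog and far more complex, and the loss, gain, and threshold numbers here are invented for illustration: a lossy trace attenuates the signal swing until the eye closes, and a ReDriver re-amplifies it back above the level a receiver can slice.

```python
# Toy model of channel loss and ReDriver gain (all numbers hypothetical).

CHANNEL_LOSS = 0.25    # fraction of amplitude surviving the long trace
REDRIVER_GAIN = 3.2    # fixed re-amplification at the ReDriver
EYE_THRESHOLD = 0.6    # minimum swing the receiver can reliably slice

def through_channel(swing: float) -> float:
    """Attenuate the transmit swing over the lossy trace/cable."""
    return swing * CHANNEL_LOSS

def redrive(swing: float) -> float:
    """Re-amplify, clamped to full swing."""
    return min(swing * REDRIVER_GAIN, 1.0)

tx_swing = 1.0
rx_no_redriver = through_channel(tx_swing)             # eye closed
rx_with_redriver = redrive(through_channel(tx_swing))  # eye open
```

With these numbers the bare channel delivers only a quarter of the swing (below threshold, data unrecoverable), while the redriven path lands comfortably above it—which is the whole story the scope photos tell.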

Figure 2: USB 3.0 open eye after the ReDriver.

#2 HDMI – seeing is believing

Our “always on” society bombards us with LCD screens everywhere we go. HDMI is the preferred video source in home theatre, conference center, digital signage, and other remote LCD setups. Second and third screens connected to modern laptops also rely on HDMI. But at 2.5 Gbps, HDMI is a serial standard that’s highly susceptible to signal integrity problems.

Using a PC’s display port as the video source, an extreme case was rigged in a lab using a long 30 m HDMI cable. The SI results are predictably bad, as shown below.

Figure 3: HDMI closed eye.

Adding a signal recovery and conditioning HDMI ReDriver, the unrecognizable signals from above were astoundingly improved to an open “eye” with ample margin, as shown below. HDMI ReDrivers not only return the signal to acceptable levels, they come in flavors that add desirable video features such as equalization, splitters, level shifts, color correction, and more. (Short video with demo set-up is here.)

Figure 4: HDMI open eye.

#3 CCTV analog video

While not a digital signal, I threw this one in because it’s a real world example of poor analog signal integrity that you can see on a screen.

Scenario: Most closed-circuit TV surveillance systems are still analog because they’re cheap, and systems are characterized by long analog cable runs from source to viewing destination. PCB connectors, traces and video CODEC ICs further degrade signals.

The lab set-up: 500 m of COAX cable, a signal generator, and AEQ switchable “on” and “off”. In the “before” picture, lousy, lossy signals and attenuation create a distorted, “wavy” test pattern. Unlike degraded digital (serial) USB 3.0 and HDMI signals, which won’t recover at the receiver, SI-affected analog signals still carry information—but the output is unacceptable in surveillance applications.

Figure 5: Distorted analog video.

After a video decoder is added at the receiving end and adaptive equalization algorithms are enabled (AEQ = “on”), the result is shown below. The image is recovered and enhanced with no loss of detail. This is after a whopping 500 m (~1/3 mile) of cable. (Video is here.)

Figure 6: Recovered video with AEQ on.

Conclusions

Signals degrade, and the faster things clock or the longer the traces/cables, the worse things get. At gigahertz speeds, ReDrivers from companies like Pericom Semiconductor clean up signals and make PCIe, USB 3.0, HDMI and other standards usable. And even for pure analog signals, video decoders with DSP-based AEQ can dramatically clean things up.

Industrial equipment talking on the IoT? Better get a gateway (device).

Editor’s note: This particular blog is sponsored by ADLINK.

Forget about controlling your garage door or AC from your smartphone; those are just parlor tricks on the Internet of Things. The real IoT deals with existing commercial and industrial devices becoming “wired” to the cloud.

The Internet of Things (IoT) is a system of systems.


Market data firm IDC estimates that 85 percent of the tens of billions of nodes and sensors needed for the Internet of Things (IoT) already exist “within installed infrastructure”.

That means they’re already powered up and doing their thing…but they’re not necessarily IoT-ready. They might be standalone stoplight controllers at a small-town intersection blinking red-yellow-green in the middle of the night, or lower-tech vending machines stocked weekly with grape soda (in homage to the movie “Up”). More sophisticated—but still standalone—systems include building HVAC or FACP (fire alarm control panel) equipment at a local senior center. None of these systems were designed for Internet connectivity. But all of them, and billions more, are candidates for being remotely controlled, maintained, and most importantly: sharing their data on the IoT.

The promise of the IoT for legacy industrial systems like these lies in unlocking the data they contain (and monetizing it), remotely healing faults, predicting maintenance, and more. Some nodes can be retrofitted with short-range wireless capability via 802.15.4 or 802.11x (Wi-Fi), but most will keep their original analog or digital I/O interfaces—everything from proprietary links to TTL to RS-485. Concentrating individual nodes and sensors into a group requires a gateway that aggregates and secures data, makes intelligent decisions, and protects the connection to the Internet cloud using Wi-Fi, 4G, or Ethernet.

The gateway will endure harsh environments: from vibrating factory floor to scorching rooftop—with no fans to lower MTBF. It’ll also be small, maybe shoebox-sized at most, easy on power, and flexible enough to accommodate modular hardware needed to work with any legacy sensor, system or future IoT node.

Data? Nope—We Want Decisions

The gateway needs some serious horsepower. Besides translating local protocols and legacy hardware interfaces into the IPv6, TLS and HTTPS language of the Internet, the gateway’s CPU needs to pass intelligent data onto the IoT—not just raw data. The distinction is huge. Merely routing data from local sensors onto the Internet is not the point of the IoT. Instead, aggregating that data and rolling it up via local algorithms (remotely loaded) into “actionable intelligence” allows the gateway’s operator to interpret machine or sensor trends and make big-picture, system-of-systems decisions.

For instance, groups of local vending machines suddenly running low on one kind of soda provides valuable demographic data that can be sold to a beverage provider. Roll up dozens of machines across a city and correlate data with what’s on TV or which concerts are in town…and maybe the IoT says Lady Gaga fans prefer Diet Coke.
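The raw-data-versus-actionable-intelligence distinction can be made concrete with a small sketch. This is hypothetical gateway-side logic, not any vendor’s API: machine IDs, product names, and the threshold are invented for illustration.

```python
# Hypothetical gateway-side roll-up: send one trend alert, not N raw records.

raw_readings = [  # per-machine stock levels reported over legacy I/O
    {"machine": "A12", "product": "diet_cola", "stock": 2},
    {"machine": "B07", "product": "diet_cola", "stock": 1},
    {"machine": "C33", "product": "grape_soda", "stock": 40},
]

LOW_STOCK = 5  # illustrative threshold

def aggregate(readings):
    """Roll raw sensor data up into 'actionable intelligence' for the cloud."""
    low = {}
    for r in readings:
        if r["stock"] <= LOW_STOCK:
            low[r["product"]] = low.get(r["product"], 0) + 1
    return [{"alert": "restock", "product": p, "machines_low": n}
            for p, n in sorted(low.items())]

alerts = aggregate(raw_readings)
```

Instead of three raw records, the cloud sees a single message: diet cola is running low across two machines—exactly the kind of trend a beverage provider would pay for.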

Example Gateway: ADLINK MXC-2300

The aforementioned gateway is more than a shoebox stuffed with flexible hardware. It is that, of course, but much more is required: knowledge of myriad legacy industrial systems is needed in order to interface with them properly. ADLINK, one of Intel’s few Premier partners in the Intel Internet of Things Solutions Alliance, provides rugged board and system products to many related industrial IoT applications and industries (Figure 1).

Figure 1: ADLINK products spanning myriad market segments that will eventually connect to the IoT. Domain knowledge is essential when interfacing to legacy sensors and equipment.


The company’s rugged HPERC chassis fit the gateway model flawlessly, while the Atom E3845-based MXC-2300 “Matrix” expandable computer (Figure 2) has enough I/O options to connect to the 85 percent of existing IoT nodes mentioned above. Additionally, as Intel catalyzes the IoT with its integrated Intel Gateway Solutions for the Internet of Things, ADLINK is certain to include the obligatory Wind River Intelligent Device Platform XT software plus McAfee’s Embedded Control software for security and manageability.

Figure 2: The modular, fanless MXC-2300 “Matrix” chassis makes an ideal IoT gateway for interfacing legacy industrial equipment, sensors and “nodes”.


Most importantly, ADLINK has something for the IoT that few suppliers have: a remote control and management system, called SEMA Cloud, built into every module. This Smart Embedded Management Agent and its PICMG-based EAPI, available on both x86 and ARM ADLINK modules, provide remote M2M/IoT connectivity via command line, GUI or HTTP (Figure 3). Essentials for controlling the gateway and IoT nodes include: a watchdog; failure forensics; fail-safe dual BIOS; info/stats such as CPU type, module serial number and uptime; temperature monitoring and fan control; a separate I2C controller; power monitoring and control; and more.

Figure 3: IoT use cases for ADLINK’s Smart Embedded Management Agent software and API.


Gateway is a Drug for IoT Revenue

With so many billions of devices ready to spew their data onto the Internet, companies can scarcely contain their rabid enthusiasm to start monetizing all that data via information and action. The IoT gateway—especially one targeting the large industrial segment—is an essential piece of the cloud picture. Companies like ADLINK have the experience, hardware, and infrastructure software necessary to put all that aggregated IoT data to use.


Samsung Galaxy Gear Teardown

Surprise! ST Micro’s ARM Cortex-M4 drives the Galaxy Gear smart watch.

Now that Samsung has recognized that users want to read the watch on their wrist like a regular watch (unlike with the Gear Fit), this wearable has a chance of gaining traction. Wearables are heating up as fitness bracelets give way to more functional smart watches.

(Courtesy: Samsung.)


On the heels of Google’s round wearable concept, and while everyone waits for Apple to say something interesting, Samsung’s new Galaxy Gear was torn apart by the folks at ABIresearch.

The following graphic from ABIresearch provides the summary of their full report. For other wearables, check out this link.

ABIresearch’s summary of the Samsung Gear teardown. (Courtesy: ABIresearch; all rights reserved.)


What’s in a name? We go from ADLINK to IoT.

There’s so much more than meets the eye to ADLINK; perhaps it’s why they’re one of only five Premier partners to Intel…and a likely leader in the coming embedded IoT phenomenon.

Editor’s note: this particular blog is sponsored by ADLINK, but the opinions represented are my own.

[UPDATE 6-26-14 4:57pm: The Intel Intelligent Systems Alliance is now called the Intel Internet of Things Alliance. This change was announced a few months ago by Intel.]

The ‘iceberg analogy,’ applied to OEM ADLINK, means there’s much more substance below the waterline than what’s immediately visible. After more than 20 announcement “blips” on my PR meter recently, I dug a bit deeper into the company, which started out in 1995 in test and measurement, building A/D/A equipment and boards. (ADLINK meant “the link between analog and digital.”) The public company moved quickly into industrial computing and a steady 20% CAGR.

There’s much more IP to ADLINK than is immediately apparent. (Courtesy: Wiki Commons.)


I know ADLINK mostly as a provider of VME, VPX, COM Express, and PC/104 modules for rugged and harsh systems. It bought PC/104 creator Ampro in 2008, bringing that company’s robust COTS innovations in PC/104 and SWaP-C (size, weight, power, cost) chassis under the ADLINK logo. ADLINK is now the world’s second largest PC/104 supplier.

In 2012, ADLINK purchased Europe’s LiPPERT to round out their rugged SBC expertise and get closer to Intel. LiPPERT brought along a device-to-cloud framework (M2M stack, API, BIOS, web server, etc) called Smart Embedded Management Agent (SEMA) that seems ideal for remote control and query of IoT devices. SEMA, it turns out, works with both x86 and ARM—the architectures that will likely dominate the emerging IoT.

These acquisitions proved that rugged, of course, applies to lots of places besides mil/aero and defense; for example, automotive, transportation (mass transit), medical, industrial, and energy. Upon my closer inspection, I found the ADLINK ‘iceberg’ revealed products and customer testimonials in all of these markets, plus a pattern of innovation and unique intellectual property (IP) like SEMA.

This unique IP isn’t found at most of ADLINK’s competitors: I see it setting the stage for ADLINK’s morphing into an IoT/M2M systems supplier.

Not Buzzed on IoT

Fads evolve quickly: the Internet of Things was M2M last year, and before that it started as Intel’s Intelligent Systems. (Intelligent Systems is also the name of the Alliance of which ADLINK is a Premier Member.)

Intel merely invented the phrase that envisioned millions of connected embedded systems—some with intelligence, some merely endpoint sensors. Two key points: 1. Intel was bang-on correct that the IoT is no mere fad but a fundamental, sustainable market shift; and 2. I believe ADLINK is supremely well positioned to capitalize on the IoT.

Intel has bets on the IoT in more places than I can list here, from Core, Atom and Quark CPUs and SoCs, to Wind River Intelligent Device Platform, to McAfee’s Embedded Security. Alphabet soup, for sure, but Intel also relies on partners—Premier Partners, those that are privy to Intel’s own iceberg—to round out the IoT ecosystem.

The collection of press releases I’ve received year-to-date point to ADLINK’s complete ‘iceberg’ for the IoT.

The company has hardware, software and systems that touch all of the following: sensors, endpoint nodes, IoT gateways, network/cloud communications, remote diagnostics, boards/chassis/systems, and demo/application code—plus a deep knowledge of how to build rugged, survivable systems.

Over the next several months, I’ll be exploring and sharing with you ADLINK’s technology and IP for the embedded IoT and cloud.

You’ll see that the company’s technology ‘iceberg’ extends well below the water line.


Efficient Signal Switches are Today’s Digital Traffic Cops

Betcha your next embedded design has or needs a signal switch. Here’s a snapshot of what to look for. The devil’s in the details.

Editor’s note: This particular blog is sponsored by Pericom.

Like a traffic cop, signal switches route GHz data quickly and efficiently. (Courtesy: Wiki Commons.)


Planning for an upcoming vacation, I got to thinking about big cities at rush hour. An experienced policeman with a whistle and urgent hand signals can route traffic at busy 4-, 5-, even 8-way intersections efficiently, with little delay and no pile-ups.

At first glance, there’s nothing sexy about a traffic cop in uniform and helmet moving all those cars across the intersection.

But the efficiency with which it’s done is elegant poetry: timed perfectly, no collisions, and cars get from point A to point B quickly (and they never have to reverse course). Signal switches are the same thing. Not terribly exciting on the surface, but one has to appreciate their elegance and efficiency.

So it is with a modern digital signal switch. In darned near every embedded system you’ve got all these USB hubs, PCI root complexes, multi-board backplanes, multi-page/bank memories, and PCIe-to-“GHz” digital channels interoperating and passing data. And they all need signal switches of some sort. The switch is a one-to-many MUX, it moves data in a non-blocking way, and it neither throttles nor degrades the signals as they pass through the ports.

The key parameters of signal switches are:

  • high-speed throughput,
  • maximum signal integrity and low insertion loss,
  • flexible fanout (x inputs to y outputs), and
  • adherence to standards like USB 2.0/3.0, PCI Express, 10/100/1G Ethernet, and others.
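Insertion loss is the parameter that most directly sets a switch’s “invisibility.” To get a feel for the arithmetic, here’s a minimal Python sketch with purely illustrative per-segment numbers (not from any datasheet) showing how losses along a channel add in dB, and what the total means for signal amplitude:

```python
# Illustrative link-budget arithmetic for a high-speed channel.
# Per-segment losses (in dB) are hypothetical examples, not datasheet values.
segment_loss_db = {
    "connector": 0.5,
    "pcb_trace": 3.0,
    "mux_switch": 1.0,   # a good signal switch adds very little loss
    "cable": 2.5,
}

total_db = sum(segment_loss_db.values())      # dB losses simply add
amplitude_ratio = 10 ** (-total_db / 20)      # convert dB back to a voltage ratio

print(f"total insertion loss: {total_db:.1f} dB")
print(f"remaining signal amplitude: {amplitude_ratio:.1%} of original")
```

The takeaway: every extra dB the switch contributes shrinks the eye at the receiver, which is why a sub-1 dB part matters.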

I’m gaining a new appreciation for traffic cops and signal switches. Consider perhaps the simplest example of how a switch is used and how its parameters matter. The PCIe 3.0 2:1 MUX shown below routes a single input (of 2 channels) to two outputs (of 2 channels). Think of a big, clunky hard-wired A-B rotary switch and you get the idea. Simple, right?

This kind of switch is commonly used to bridge different PCIe buses on the same card, or connect/enable different backplane slot cards, or move data between on- and off-card (mezzanine) resources. In other words: it’s used in lots of places.

But PCIe 3.0 is based on differential signaling running 128b/130b encoding and moving data at an RF-like 8 Gb/s (8 giga-transfers/sec, to be precise). At this frequency, signals degrade, are attenuated, reflect and bounce around, and the signal “eye” can easily close—meaning poor speed and signal integrity; so much so that the signals may not even resemble PCIe at the output ports. So our “simple” MUX signal switch needs to move the data quickly and cleanly, and appear to the hosts and targets as if it weren’t even there!
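The encoding overhead behind those line rates is worth a quick back-of-the-envelope. PCIe Gen 1 and Gen 2 use 8b/10b encoding (80% efficient), while Gen 3 moved to the leaner 128b/130b. A short Python sketch of the per-lane arithmetic:

```python
# Per-lane line rate (GT/s) and encoding efficiency by PCIe generation.
# Gen 1/2 use 8b/10b encoding; Gen 3 uses 128b/130b.
generations = {
    "PCIe 1.0": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
}

for name, (gt_per_s, efficiency) in generations.items():
    gbps = gt_per_s * efficiency        # usable bits per second per lane
    mbytes = gbps * 1000 / 8            # usable MB/s per lane
    print(f"{name}: {gbps:.3f} Gb/s ≈ {mbytes:.0f} MB/s per lane")
```

That near-lossless 128b/130b encoding is how Gen 3 nearly doubles Gen 2’s throughput while only raising the line rate from 5 to 8 GT/s.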

Switches like the PI3PCIE3212 from Pericom Semiconductor have covered all the details, freeing designers to map their signal architecture without worrying about the nuances. What about all those dB numbers @ frequency in the figure? They add up to hassle-free designs and signals that move from A to B without extra designer effort.

Sort of like cars at a busy intersection. When that cop is there waving his hand, data traffic is under control and moves smoothly. Your signal switch behaves similarly.

TI emphasizes “KISS” in new Wi-Fi ICs

Low cost is a “given”; TI instead focuses on the “simple, stupid” part of the connected IoT.  New Internet-on-a-chip Wi-Fi ICs.

By: Chris A. Ciufo

Hey, this IoT thing has got me really stoked. As a long-time geek, I’ve been hard-wiring automated stuff since I was a kid. Surrounded by my app-enabled Xfinity CATV and my AirPlay-connected home theater, I’m anxious to add some door cams, a remote controlled overhead garage door, basement temperature and flood sensors, and…so much more!

But if every embedded sensor, doodad, HVAC and industrial machine on the planet is to be connected to the Internet—which is the goal of the Internet of Things/Everything (IoT)—the ICs to connect them have got to be cheap. As in a couple of bucks per connection in high volume.

But more importantly, it’s got to be easy for non-RF designers to add Wi-Fi into their products. Can you imagine if every 110VAC replacement plug from Home Depot had built-in Wi-Fi? I’d pay $5-10 for one of those. How about a light switch? Ceiling fan? The office shredder? The burbling Zen water feature on the receptionist’s desk?

Most of these embedded “wannabe nodes” were created by engineers who’ve never before designed with Wi-Fi. Nor do they understand the hundreds of APIs needed for the most basic TCP/IP connection.

Or: how likely is it that designers have experience with the IoT security needed to lock down factory automation or your nanny cam? Forget it; Wi-Fi’s AES encryption and the Internet’s TLS/SSL security are more complicated than the whole device itself!

TI is embedding new “Internet on-a-Chip” Wi-Fi ICs with the KISS principle: keep it simple, stupid. But price matters, too.

TI’s SimpleLink is “Internet on a Chip”

Available “with” (CC3200) or “without” (CC3100) an embedded ARM Cortex-M4 MCU to run apps like email, SMS or a web server, TI’s new all-in-one SimpleLink Wi-Fi ICs make all that complicated Wi-Fi and Internet stuff easy for designers. They’re easy on price, too, so the “cheap” part is covered: the CC3100 is $6.70 @ 1KU; the CC3200 is about $8.00 for 1,000.

Texas Instruments’ new SimpleLink “Internet on-a-Chip” Wi-Fi devices. The CC3200 includes an application processor that can run email, SMS, a web server, and more. (Courtesy: TI.)

Keeping “KISS” in mind, the company recognizes how difficult Wi-Fi can be to design into a system, according to Dana Myers, Channel Marketing and Product Manager for TI’s Wireless Connectivity Group. If the IoT is ever to find its way into the all-around-us devices mentioned above, the design-in process must be easy.

The Internet of Things/Everything (IoT) is growing to add connectivity into all kinds of embedded devices. Each will become a connected “node”…only if it can be connected to the Internet. (Courtesy: TI).

According to Myers, “TI has done the hard work for designers.” For example, just one API call is needed to handle Internet security protocols (versus “hundreds” if hand coded). Further examples of how TI has dramatically simplified things are shown below.
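For a sense of what “one API for security” can look like, here’s an analogy using Python’s standard-library ssl module. This is not TI’s SimpleLink API, just an illustration of the same idea: a single call sets up certificate verification, hostname checking, and modern protocol defaults that would take enormous effort to hand-code.

```python
import ssl

# One call configures certificate verification, hostname checking, and
# sane TLS protocol/cipher defaults -- work that would be hundreds of
# lines if hand-coded against a raw TLS implementation.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates are checked
print(context.check_hostname)                    # hostnames are checked
# Connecting is then just: context.wrap_socket(sock, server_hostname=...)
```

The same principle applies on a microcontroller: hide the handshake and cipher negotiation behind one call, and suddenly non-RF, non-security engineers can ship connected products.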

SimpleLink devices emphasize the KISS principle: “keep it simple, stupid”. Adding Wi-Fi to an embedded device has never been simpler. (Courtesy: TI.)


Better than IEEE 802.15.4 and BLE

If Wi-Fi is to be the “last mile” of cloud connectedness to the IoT’s billions of devices, it will have to displace other wireless technologies. The collection of IEEE 802.15.4 “personal” network standards that include ZigBee and 6LoWPAN—plus the newer Bluetooth Low Energy standard (BLE)—are not competition for Wi-Fi.

“The reason,” said TI’s Myers, “is that Wi-Fi is already installed in most locations where the devices are.” Moreover, the 802.15.4 and BLE standards are designed for “personal range,” lower-rate connectivity than Wi-Fi. And while most IoT sensors will wake from sleep and broadcast only small burst packets (in other words: not much M2M data), some IoT devices may consume loads of bandwidth. Wi-Fi’s advantage, then, is that it is low cost, ubiquitous, long range, and a fat pipe.

Yet Wi-Fi’s Achilles’ heel has been its power consumption. Just look at your 4-hour connected laptop to convince yourself of how much power connectivity can burn.

One Year on Two “AA” Batteries

Besides making Wi-Fi cheap and easy, TI will make it long-lasting, too. The company states the intention of “bringing Wi-Fi power to a new low” with a year’s worth of connectivity on just two AA alkaline batteries.

The “always connected” use case (left, figure below) shows 125 μA sleep current while still connected to the network. The chip can sleep for up to 2 seconds at a time between Wi-Fi beacons, versus the typical 100 ms sleep period (a 20x improvement). While awake, the CC3100 Internet on-a-Chip burns a mere 37 mA awaiting Rx beacon reception.

Boasting a year’s worth of battery life on two alkaline AA batteries, the CC3100 and CC3200 employ some slick power conservation modes. Note: in the left-hand figure, the sleep current is 125 uA (not 120 uA) according to a TI spokeswoman. (Courtesy: TI.)

In true M2M sensor mode, an “intermittently connected” node will burn just 4 μA in hibernation, requiring only 95 ms to wake up and establish a secure Wi-Fi connection. Add another 105 ms onto that and the network processor IC has established a secure TLS connection to the Internet.

All of these numbers—power consumption, sleep and hibernation current, and time to establish cloud connectivity—are impressive.
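To sanity-check the battery claim, here’s a rough duty-cycle estimate in Python. The 4 μA hibernate current and ~200 ms wake-to-secure-connection time come from the article; the active current, awake time per event, and event rate are my own illustrative assumptions:

```python
# Rough duty-cycle battery estimate for an "intermittently connected" node.
# Hibernate current (4 uA) and ~200 ms wake-to-TLS time are from the article;
# the 100 mA active current and one report per hour are assumptions.
HOURS_PER_YEAR = 24 * 365

hibernate_ma = 0.004          # 4 uA in hibernation
active_ma = 100.0             # assumed average current while awake
awake_s_per_event = 1.0       # ~0.2 s to connect, plus time to send data
events_per_hour = 1

awake_h_per_year = awake_s_per_event * events_per_hour * HOURS_PER_YEAR / 3600
hibernate_h_per_year = HOURS_PER_YEAR - awake_h_per_year

mah_per_year = (hibernate_ma * hibernate_h_per_year
                + active_ma * awake_h_per_year)

print(f"charge used per year: {mah_per_year:.0f} mAh")
# Two series AA alkalines hold roughly 2000-2500 mAh, so under these
# assumptions an hourly report fits within a year with plenty of margin.
```

Even with generous assumptions, the hibernate current is so low that the awake bursts dominate the budget, which is exactly why that fast 200 ms wake-to-connect time matters.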

An Entire Ecosystem of SDKs, HDKs, Apps

Since the goal with these new devices is simplicity for designers, TI is making available both LaunchPad (base cards with MCUs) and BoosterPack (mezzanine cards with I/O) hardware, plus over 30 sample applications. Apps range from email and SMS to an integrated web server. Other applications are possible.

TI has also partnered with cloud aggregators like Exosite, IBM, Xively and others. This assures “big data” remote manageability of M2M nodes and communication with the CC3100 and CC3200 ICs. When asked if TI plans on releasing its own MQTT protocol and cloud dashboard, the TI spokeswoman merely replied “no plans, right now”.

But at the rate TI’s going by pushing down the barriers to Wi-Fi connectivity—in price, simplicity, ease-of-use, and security—it’s only a matter of time before the company adds more SimpleLink goodies.

They’re really following the “KISS” principle.