What I want to see at EE Live! / ESC

On the heels of CES, Embedded World, MWC and SXSW—“embedded” is no longer the red-headed stepchild behind PCs. The market is embedded. Here’s what I’m looking for at this week’s EE Live! / ESC conference and exhibition.

Rant:  I hate typing that silly exclamation point in the conference’s name. Think it’d be funny to replace it with a question mark? As in “EE Live?” Not sure what question this would be asking, any more than I have a clue why the good folks at UBM added the “!”.

Here’s my list of wanna-sees at EE Live! 2014 and my reasons why.

8-bit MCUs live on. Microchip’s latest PIC MCU family adds peripherals, op amps, and analog.

  • The IoT, IoE, M2M, connected everything.  I expect to see more small module makers announcing connectivity options and remote control smartphone apps. The emphasis will be on extreme battery life for some systems (such as Bluetooth Low Energy), the aggregation of myriad roll-your-own sensors, and connectivity. While I saw all kinds of Wi-Fi and cellular add-on boards at the 2013 Telit Wireless Devcon and wrote about Qualcomm here, I’ve yet to see much emphasis on wireless connectivity in the general embedded market. I expect it to happen in a groundswell…eventually.


  • Anything low power from Intel, such as Bay Trail Atoms or Quark or whatever. I know Intel is winning designs, but are they really getting traction against ARM-based SoCs?


  • Is the RTOS dead? Wind River now has VxWorks 7 and it’s oriented around the IoT. Wind River is great at spotting trends and I consider them credible. So is the need for determinism really all that important in most embedded systems? Betcha not, but I want to test that theory.


  • Embedded security…or at least the glimmer of awareness. I harp on this all the time. Vendors from Mocana to Icon Labs (one of our bloggers) to GD’s Open Kernel Labs and Wind River understand security. But what about the other 98% of the embedded market? Let’s see if they’re getting the message yet. Or do we need a few dozen more Target-style security breaches to wake up the designers?


  • Something that’s not a commodity and the same as everyone else’s product. An SBC’s an SBC. But this year at EE Live I’m meeting with scores of new-to-me companies like Cymbet (solid state micro energy sources) and Newark Element14 with their RIoTboard.org initiative.  At the risk of sounding elitist, show me something new!


  • Sensors and stuff: if this whole Internet of Things thing is real (it is), all those intelligent hubs and connected pipes will eventually terminate at…something. That endpoint will measure voltage, temperature, pressure, wigwams/hour, or whatever. I’m expecting to see some interesting new sensors, collection of sensors, or endpoint devices/systems.


  • Demo boards:
    The new RioTboard.org initiative adds Android to development boards. Think Raspberry Pi with a GUI.

    Inforce Computing, Gumstix, Element14, and more should be at EE Live! talking up ways to get neophytes up to speed on their next system. But equally important will be the academic, hobbyist, and engineer-at-night users who will create the next iRobot idea or company.


  • The demise of 8-bit: it’s been rumored since I ran an 8/16-bit MCU group at Sharp 10 years ago, yet 8-bangers are as popular today (in consumer white goods, for instance) as they’ve ever been. Microchip just announced their latest 8-bit PIC MCU with intelligent analog and a surfeit of peripherals. I’m eager to ask them why; they’re on a financial roll lately and have had few stumbles. Maybe 8-bit is here to stay.


  • And finally, a retrospective is always fun. Whatever happened to former ESC (before it was EE Live! that is) initiatives like: Machine vision, Multicore, and Virtualization?  They’ve either disappeared or gone mainstream.  I think the latter, and I’m hoping to see how they’ve been integrated.

I’ll update you as I wander the aisles of EE Live! / ESC and meet with embedded suppliers.



PCI Express Switch: the “Power Strip” of IC Design

Need more PCIe channels in your next board design? Add a PCIe switch for more fanout.

Editor’s notes:

1. Although Pericom Semiconductor sponsors this particular blog post, your author learned that he actually knows very little about the complexities of PCIe.

2. Blog updated 3-27-14 to correct the link to Pericom P/N PI7C9X2G303EL.

Perhaps you’re like me; power cords everywhere. Anyone who has more than one mobile doodad—from smartphone to iPad to Kindle and beyond—is familiar with the ever-present power strip.

An actual power strip from under my desk. Scary...

The power strip is a modern version of the age-old extension cord: it expands one wall socket into three, five or more.  Assuming there’s enough juice (AC amperage) to power it all, the power strip meets our growing hunger for more consumer devices (or rather: their chargers).


And so it is with IC design. PCI Express Gen 2 has become the most common interoperable, on-board way to add peripherals such as SATA ports, CODECs, GPUs, Wi-Fi chipsets, USB hubs and even legacy peripherals like UARTs. The wall socket analogy applies here too: most new CPUs, SoCs, MCUs or system controllers lack sufficient PCI Express (PCIe) ports for all the peripheral devices designers need. Plus, as IC geometries shrink, system controllers have lower drive capability per PCIe port, and signals degrade rather quickly.

The solution to these host controller problems is a PCIe switch to increase fanout by adding two, three, or even eight additional PCIe ports with ample per-lane current sourcing capability.

Any Port in a Storm?

While our computers and laptops strangle everything in sight with USB cables, inside those same embedded boxes it’s PCIe as the routing mechanism of choice. Just about any standalone peripheral a system designer could want is available with a PCIe interface. Even esoteric peripherals—such as 4K complex FFT, range-finding, or OFDM algorithm IP blocks—usually come with a PCIe 2.0 interface.

Too bad then that modern device/host controllers are painfully short on PCIe ports. I did a little Googling and found that if you choose an Intel or AMD CPU, you’re in good shape. A 4th Gen Intel Core i7 with Intel 8 Series Chipset has six PCIe 2.0 ports spread across 12 lanes. Wow. Similarly, an AMD A10 APU has four PCIe (1x as x4, or 4x as x1). But these are desktop/laptop processors and they’re not so common in embedded.

AMD’s new G-Series SoC for embedded is an APU with a boatload of peripherals and it’s got only one PCIe Gen 2 port (x4). As for Intel’s new Bay Trail-based Atom processors running the latest red-hot laptop/tablet 2:1’s:  I couldn’t find an external PCIe port on the block diagram.

Similarly…Qualcomm Snapdragon 800? Nvidia Tegra 4 or even the new K1? Datasheets on these devices are closely held for customers only, but I found Developer References that point to at best one PCIe port. ARM-based Freescale processors such as the i.MX6, popular in set-top boxes from Comcast and others, have one lone PCIe 2.0 port (Figure 1).

What to do if a designer wants to add more PCIe-based stuff?

Figure 1: Freescale i.MX ARM-based CPU is loaded with peripheral I/O, yet has only one PCIe 2.0 port. (Courtesy: Freescale Semiconductor.)

‘Mo Fanout

A PCIe switch solves the one-to-many dilemma. Add in a redriver at the Tx and Rx end, and signal integrity problems over long traces and connectors all but disappear. Switches from companies like Pericom come in many flavors, from simple lane switches that are essentially PCIe muxes, to packet switches with intelligent routing functions.

One simple example of a Pericom PCIe switch is the PI7C9X2G303EL. This PCIe 2.0 three-port/three-lane switch has one x1 upstream port and two x1 downstream ports, and would add two ports to the i.MX6 shown in Figure 1. This particular device, aimed at those low-power consumer doodads I mentioned earlier, boasts some advanced power-saving modes and consumes under 0.7W.
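The fanout arithmetic is worth making explicit: each switch spends one of the host’s ports on its upstream link and gives back its downstream ports. A quick sketch (the 1-up/2-down port count comes from the PI7C9X2G303EL above; the helper itself is just my illustration):

```python
# Back-of-the-envelope PCIe fanout arithmetic: each switch consumes one
# upstream port and contributes its downstream ports.
def usable_ports(host_ports, switches):
    """switches: list of (upstream_ports, downstream_ports) per switch."""
    total = host_ports
    for up, down in switches:
        total = total - up + down   # spend the upstream, gain the downstream
    return total

# i.MX6 host with one PCIe 2.0 port, plus one 1-up/2-down switch:
print(usable_ports(1, [(1, 2)]))          # 2 usable ports
# Cascade a second identical switch off one of those ports:
print(usable_ports(1, [(1, 2), (1, 2)]))  # 3 usable ports
```

The same bookkeeping scales to the bigger packet switches: an eight-downstream-port part turns one host port into eight.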

Hook Me Up

Upon researching this for Pericom, I was surprised to learn of all the nuances and variables to consider with PCIe switches. I won’t cover them here, other than mentioning some of the designer’s challenges: PCIe Gen 1 vs Gen 2, data packet routing, latency, CRC verification (for QoS), TLP layer inspection, auto re-send, and so on.

PCIe switches come in all flavors, from the simplest “power strip” to what is essentially an intelligent router-on-a-chip. And for maximum interoperability, all of them need to be compliant with the PCI-SIG specs, as verified at a plugfest.

So if you’re an embedded designer, the solution to your PCIe fanout problem is adding a PCI Express switch. 

The man asked: “Did Intel Lose Altera?”

Altera has made hay around Intel’s 14nm tri-gate (FinFET) process advantages. Have Intel’s Broadwell delays pushed Altera away?

In a recent post entitled “Did Intel Lose Altera?” blogger Ashraf Eassa muses at investment site The Motley Fool that Altera “is crawling back to Taiwan Semiconductor [TSMC]” for Altera’s high end Stratix 10 devices. His post is based upon an article originally written in DigiTimes which I’ve been unable to locate. (This article is similar, but speculates about Apple turning to Intel.)

Altera’s Stratix 10 relies on multicore ARM Cortex A53s, DSP blocks, OpenCL, and Intel’s 14nm Tri-Gate process.

The point, I assume, is Intel’s recent stumble with 14nm Broadwell CPUs, which were originally planned for Q4 2013 with production in Q1 2014 (now), but could possibly be postponed to Q4 2014 (says DigiTimes here). 3D transistors at this fine geometry are approaching rocket science, so any delay, even by the mighty Intel, is not surprising, and I’d consider it pretty uneventful.

Except I’m not Intel–with a predictable Tick-Tock roadmap and that whole Moore’s Law thing–nor am I running Altera. The FPGA company Altera, of course, is trying to one-up Xilinx who so far is sticking with TSMC’s 20nm goodness.

  • I offer some good insight into Altera’s Stratix 10 plans for Intel’s foundry here.
  • For some insight on Xilinx’s UltraScale products in TSMC’s process, read here.

All of this is playing out under the microscope of Intel’s shallow penetration into the mobile (smartphone and tablet) markets, where ARM-based SoCs from Nvidia, Qualcomm and others seem to overshadow the dramatic advances Intel has made with its Bay Trail and Quark roadmaps. So Intel can’t afford any bad press with the financial community.

I’m a fan of Intel, believe they’ll dominate again, and give them kudos for countless new-generation Atom design wins in Windows 8.1, Android, Tizen and other mobile devices. And I’m stoked about how Intel is evolving the all-day battery UltraBook into the 2:1 laptop/tablet. Still, ARM’s licensees dominate this landscape.

So we’ll keep an eye on this story since it intersects several others we’ve posted in the past few months.


Smile, You’re on an AEQ-corrected Analog Camera

Despite the trend towards IPTV and VoIP, most CCTV surveillance cameras are analog. Signals degrade quickly and quality suffers. Until now.

Editor’s note: This blog is sponsored by Pericom Semiconductor.

It’s estimated by Pericom Semiconductor that 80 – 90 percent of the low cost video surveillance market still uses analog video cameras and not digital IPTV. They’re everywhere: from parking garages and street corners, to indoor malls and police cruisers. The cameras keep getting better; their long-distance cable signal quality doesn’t.

New features in analog cameras include stereo audio, in-camera signal pre-conditioning, and much higher resolution images: D1 (720 x 480, NTSC) has roughly 4x the pixels of CIF, and WD1 (wide D1, 960 x 480, NTSC) adds about 33 percent more pixels than D1. These add up to more information per frame at 30 fps, which means even more data to stuff down long cable runs before termination at the digital video recorder (DVR) on the receiving end.
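Those claims are easy to sanity-check. A minimal sketch, assuming CIF at its usual NTSC 352 x 240 (a figure not given above):

```python
# Sanity-check the resolution claims. CIF is assumed to be the usual
# NTSC 352 x 240 -- an assumption, since it isn't specified above.
formats = {"CIF": (352, 240), "D1": (720, 480), "WD1": (960, 480)}
pixels = {name: w * h for name, (w, h) in formats.items()}

print(round(pixels["D1"] / pixels["CIF"], 1))   # ~4.1x CIF's pixel count
print(round(pixels["WD1"] / pixels["D1"], 2))   # 1.33x D1: a third more pixels

# And the payload grows linearly with pixel count at a fixed 30 fps:
FPS = 30
for name in formats:
    print(name, pixels[name] * FPS, "pixels/second down the cable")
```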

But analog cable runs always end up the same way: with lousy, lossy, signals.

Using low-cost coax in low-budget installations, signals degrade at 1.6 dB per 100 feet. So over the length of a football field (300 feet), the video reaching the DVR or security console is already nearly 5 dB down and looks noticeably poor (Figure). And 300 feet is pretty short when you consider cables snaked up/down pillars, into attics, or through myriad cable-to-cable noise-injecting connectors.
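The loss budget is simple arithmetic; here it is as a sketch using the 1.6 dB/100 ft figure above (the 600-foot case anticipates a claim of doubled usable cable length, which is my own reading of it):

```python
# Coax attenuation arithmetic using the 1.6 dB per 100 feet figure above.
DB_PER_100FT = 1.6

def loss_db(feet):
    """Signal loss in dB over a cable run of the given length in feet."""
    return DB_PER_100FT * feet / 100.0

print(round(loss_db(300), 1))   # a football field: 4.8 dB down
print(round(loss_db(600), 1))   # double the run: 9.6 dB down
```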

Long cable runs and analog video cameras: bad data is what you get without an Rx amplifier or other digital enhancement.

Ironically, the solution to this signal loss is often to add expensive analog amplifiers at the receiving end, at a cost of $5 to $10 per camera channel. So much for low-cost surveillance. A typical 16- to 32-camera system tacks on up to $320 in amplifiers. Better quality cables help, but run length is the killer.

Pericom Semiconductor, a company exclusively focused on signal integrity products, builds adaptive equalization (AEQ) algorithms into several digital video and audio/video decoder ICs. The company claims that proprietary adaptive filters improve signals by 2x, translating to either double the picture clarity or decent analog video at twice the usable cable length.

Compared to an analog amplifier, a company spokesman told me, designing the ICs into a DVR receiver will save tens of dollars per video channel and improve performance. Voila! Clean analog signals over long cable runs on a budget.

So look sharp the next time you spot a video surveillance camera: it’s possible your smiling mug may be more clearly recorded than you think, thanks to the AEQ signal processing available for plain old analog video cameras.




IHS Embedded Ranks VME/VPX Suppliers

With vendor-supplied data, analyst firm IHS ranks the largest embedded suppliers in the VME/VPX market.

[Update 22 Jan 14: Replaced figure with the original slide from IHS; added a link to the entire IHS presentation here.  C. Ciufo ]

At today’s Embedded Tech Trends insider conference in Phoenix, IHS senior analyst Toby Colquhoun revealed the top suppliers in the VME and VPX market space for the year ended 2012 (the latest data available). The conference is sponsored by VITA, the standards organization responsible for these open standards. It’s always a challenge to get quantitative data on this niche market, which primarily serves the world’s rugged military and aerospace markets with harsh-environment modules, connectors and systems.

GE Intelligent Platforms is the largest supplier when VME, VPX and systems are combined, followed by: Curtiss-Wright Controls Defense Systems, Mercury Computer, Kontron, and Emerson Network Power (Figure 1, with apologies for the quality).

IHS ranking of VME and VPX suppliers for 2013, as presented at Embedded Tech Trends conference. (Courtesy: IHS, VITA, ETT.)

Toby also indicated that the VME market is shrinking, as legacy designs migrate to VPX modules and systems. In the VPX-only market for modules and systems, the ranking changes to:

1. Curtiss-Wright at 38 percent

2. GE Intelligent Platforms at 19 percent

3. Mercury Computer at 16 percent.

This ranking is consistent with my own expectations (and CW’s recent press releases proclaiming themselves number one). Interestingly, when I asked about small form factor systems like those from these same suppliers, plus ADLINK, Advantech, MEN Mikro and others, Toby responded that IHS doesn’t see these kinds of rugged systems encroaching on the VME/VPX market. I disagree, but can’t quantify that just yet.

We’ll update this data once we receive the actual presentation later today.


Some insight into Altera’s Stratix 10 plans

Hint: Intel’s 14nm tri-gate (FinFET) process is at the core (no pun) of Altera’s recipe, but architecture and software tools round out new FPGA family plans.

Altera SoC roadmap.

First to announce plans for a quad-core ARM Cortex A53-based SoC FPGA, Altera will rely on their Intel fab exclusivity to provide what an Altera spokesman called “unimaginable performance”. One of the titans in the FPGA market (the other is Xilinx), Altera has been slowly opening the curtain on their roadmap plans.

I’ve been following and reporting on Altera’s announcements, acquisitions, and possible strategies for the last 12 months. Now, all is revealed in the company’s Stratix 10 technology announcement. An in-depth report (with links) is available here.

Editor’s note: While Altera is announcing their technology plans, Xilinx announced new 20nm devices in Virtex and Kintex UltraScale devices. Our in-depth report on Xilinx will follow shortly.  C. Ciufo, editor.

Can industrial imaging software benefit military SIGINT analysis?

Software creates a height map from a 2D image.

I received a press release today from Olympus Industrial Equipment Group (the camera guys) about an update to their image analysis software used with industrial microscopes. Who knew Olympus made microscopes? This is not normally my area of expertise.

However…the Olympus Stream image enhancement software has some pretty awesome capabilities that make me wonder if this COTS software could be used (or adapted) to work in military/aerospace signals intelligence (SIGINT) or reconnaissance imagery analysis. After all, the key part of C4ISR is not capturing the (image) data, it’s analyzing the images to make meaningful decisions. For instance: was there a truck parked there yesterday? Has that patch of grass been matted down by a vehicle or group of humans?

As well, images often need to be enhanced to compensate for poor lighting or dust and fog obfuscation, and the ability to finely measure distances would be handy too.

HDR image enhancement in the Olympus Stream microscope software might benefit military image analysts. Note how this sample looks like a satellite image of a plot of land. (Courtesy: Olympus; YouTube.)

The 1.9 version of the Stream software adds these features: Automatic Measurement and Coating Thickness. “Automatic Measurement allows the creation of complex measurements using scanners by automatic detection of material edges and pattern recognition. This materials solution automatically measures distances, circle diameters, and angles between two lines. Automatic Measurement also supports the multiple stage location and sample alignment with OLYMPUS Stream Motion.”

A full-on (top-down) view of a sample. Imagine that this object is an enemy bunker. (Courtesy: Olympus.)

Now forget about the fact that someone is analyzing a hunk of metal covered with scratches that gouge hills and valleys out of the surface. Couldn’t this be an image of an earthscape with real hills and valleys? Might we want to measure the distance between some of these surface features? The software can also digitally adjust focus, change and enhance details in the image, and create 3D images using z-axis slices from the original image.

Image enhancement and 3D rendering from a 2D view and z-axis sensor slices. (Courtesy: Olympus.)

To me, this COTS software has many features that U.S. DoD and CIA analysts need when analyzing recon images. I wonder if it could be used not just in microscopes, but in tactical military scenarios.


Intel’s Atom Secret Decoder Ring

Intel’s code names have gotten even more confusing with the new Atom processors.

It used to be that Intel had one code name for a processor family or process technology variant and then the part number SKUs followed easily from that. Haswell, for instance, is the 4th Generation Core family and the SKUs are 4-digit numbers starting with “4”. Ivy Bridge was 3rd Generation with 4-digit SKUs starting with “3”. And so on.

The Atom family has changed all of that and I’m confused as hell. Every time I see a new Atom SKU like “C2000” or “E3800” I have to do some research to figure out what the heck it actually is. For some reason, Intel has split the Atom family into mobile, value desktop, microserver, and SoC versions. I’ve yet to find a comprehensive comparison chart that maps the code names (and former code names) to SKUs, markets, or other useful quick-look info. The chart probably exists somewhere in the massive Intel website(s) ecosystem. Or in a PowerPoint presentation presented at an overseas conference. Or maybe not.

Here are a few hints, but I won’t even pretend that this is accurate or comprehensive.

The Artist Formerly Known as Bay Trail

Intel tries to demystify this whole naming bug-a-boo with a sort-of useful table called “Products (Formerly Bay Trail)”.

Intel attempts to de-mystify Atom’s myriad code names and SKUs. I’m not sure it helps much because you have to drill down in each instance and there’s no market segment mapping (you’d need the Press Release to do that).

Bay Trail is the newest 22nm Atom designed for mobile, value desktop and the sorts of applications you’d expect Haswell’s baby brother to target. But there are also “Pentium” versions (J and N versions) and Celeron versions (N). Intel is targeting these at desktops, low-end laptops and other “value” platforms that can’t bear the price of Ivy Bridge or Haswell CPUs and chipsets.

Bay Trail Atoms also come in E and Z versions. The E38xx was just launched; called the “SoC” version, it’s based on Bay Trail’s Silvermont microarchitecture and has a TDP of 10W, targeting embedded applications. The Z versions are aimed at tablets–exactly the target you’d expect for Intel’s flagship low-power CPU.

Atoms to Protect and Server

Then there are the C2000 Atom versions. There are two flavors here, broken down by market segment. They’re all 22nm Atoms, but the C23xx, C25xx and C27xx SKUs target servers–more specifically, the microservers where ARM is making headway. Intel’s got a leadership position in servers with Sandy Bridge (Gen 2), Ivy Bridge (Gen 3), and Haswell (Gen 4) CPUs…plus all manner of heavyweight Xeon server CPUs. So it’s essential to offer a competitive product to whatever ARM and their partners might throw at servers (such as the in-order Cortex-A53 or the out-of-order, deep-pipeline A57).

To confuse matters further, there’s the C2000 Atoms targeted at communications platforms. Bizarrely, Intel also calls them–wait for it–C23xx, C25xx, and C27xx. Could they not have changed a few digits around to protect designers’ sanity if only to obviate the need to look them all up?

These Atoms aren’t Bay Trail at all–they’re the former “Avoton” coded Atoms and they’re definitely not aimed at mobile like Bay Trail. As I dug a bit deeper to try to figure this out, more code names like Rangeley popped up, along with an Avoton block diagram showing the same Bay Trail Silvermont core surrounded by Avoton I/O resources, all labeled “Edisonville”. Avoton? Rangeley? Edisonville?

(Sigh.) At that point I decided to stick with the Bay Trail embedded versions for now and forget about the networking and communications versions before my head exploded. I’ll dig into this again with a fresh perspective and see if I can find a roadmap slide that makes this all clear.

If you can suggest some links–better yet, Intel charts–that stitch the Atom family into all of its permutations please send me a link. I’ll post your name with fanfare and gratitude.

In the meantime, be sure to always check www.ark.intel.com as your first SKU reference. It won’t map part numbers to the all-important market segments, but it’s a good start.
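For my own sanity, here is the partial decoder ring I’ve pieced together so far, expressed as a lookup table. Treat it as my working notes, not an official Intel mapping; the SKU groupings may well be wrong:

```python
# A partial Atom "decoder ring" distilled from the notes above.
# This is my working guess, NOT an official Intel mapping.
ATOM_DECODER = {
    "E38xx": {"code_name": "Bay Trail", "core": "Silvermont",
              "target": "embedded 'SoC' versions, ~10W TDP"},
    "Zxxxx": {"code_name": "Bay Trail", "core": "Silvermont",
              "target": "tablets"},
    "J/N":   {"code_name": "Bay Trail (Pentium/Celeron brands)",
              "core": "Silvermont",
              "target": "value desktop and low-end laptop"},
    "C2xxx": {"code_name": "Avoton (microservers) / Rangeley (comms)",
              "core": "Silvermont",
              "target": "microservers and networking platforms"},
}

def decode(sku_family):
    # Fall back to ark.intel.com, per the advice above.
    return ATOM_DECODER.get(
        sku_family, {"target": "unknown -- check ark.intel.com"})

print(decode("C2xxx")["code_name"])
print(decode("D1000")["target"])
```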

The Soft(ware) Core of Qualcomm’s Internet of Everything Vision

Qualcomm supplements silicon with multiple software initiatives.

Qualcomm Snapdragon
Update 1: Added attribution to figures.
The numbers are huge: 50B connected devices; 7B smartphones to be sold by 2017; 1000x growth in data traffic within a few years. Underlying all of these devices in the Internet of Things…wait, the Internet of Everything…is Qualcomm. Shipping 700 million chipsets per year on top of a wildly successful IP creation business in cellular modem algorithms, plus being arguably #1 in 3G/4G/LTE with Snapdragon SoCs in smartphones, the company is now setting its sights on M2M connectivity. Qualcomm has perhaps more initiatives in IoT/IoE than any other vendor. Increasingly, those initiatives rely on the software necessary for the global M2M-driven IoT/IoE trend to take root.

Telit Wireless Devcon
Speaking at the Telit Wireless Devcon in San Jose on 15 October, Qualcomm VP Nakul Duggal of the Mobile Computing Division painted a picture showing the many pieces of the company’s strategy for the IoT/E. Besides the aforementioned arsenal of Snapdragon SoC and Gobi modem components, the company is bringing to bear Wi-Fi, Bluetooth, local radio (like NFC), GPS, communications stacks, and a vision for heterogeneous M2M device communication they call “dynamic proximal networking”. Qualcomm supplies myriad chipsets to Telit Wireless, and Telit rolls them into higher-order modules upon which Telit’s customers add end-system value.

Over eight Telit Wireless modules are based upon Qualcomm modems, as presented at the Telit Wireless Devcon 2013.

But it all needs software in order to work. Here are a few of Qualcomm’s software initiatives.

Modem’s ARM and API Open to All
Many M2M nodes–think of a vending machine, or the much-maligned connected coffee maker–don’t need a lot of intelligence to function. They collect data, perform limited functions, and send analytics and diagnostics to their remote M2M masters. Qualcomm’s Duggal says that the ARM processors in Qualcomm modems are powerful enough to carry that computational load. There’s no need for an additional CPU, so the company is making Java (including Java ME), Linux and ThreadX available to run on its 3rd generation of Gobi LTE modems.

Qualcomm is already on its 3rd generation of Gobi LTE modems.

Qualcomm has also opened up the modem APIs and made available its IoT Connection Manager software to make it easier to write closer-to-the-metal code for the modem. Duggal revealed that Qualcomm has partnered with Digi International in this effort as it applies to telematics market segments.

Leverage Smartphone Graphics
And some of those M2M devices on the IoE may have displays–simple UIs at first (like a vending machine)—but increasingly more complex as the device interacts with the consumer. A restaurant’s digital menu sign, for example, need not run a full blown PC and Windows Embedded operating system when a version of a Snapdragon SoC will do. After all, the 1080p HDMI graphics needs of an HTC One with S600 far outweigh those of a digital sign. Qualcomm’s graphics accelerators and signal processing algorithms can easily apply to display-enabled M2M devices. This applies doubly as more intelligence is pushed to the M2M node, alleviating the need to send reams of data up to the cloud for processing.

Digital 6th Sense: Context
Another area Duggal described as the “Digital 6th Sense” might be thought of as contextual computing. Smartphones or wearable fitness devices like Nike’s new FuelBand SE might react differently when they’re outside, at work, or in the home. More than just counting steps and communicating with an App, if the device knows where it is…including precisely where it is inside a building…it can perform different functions. Qualcomm’s portfolio now includes the full Atheros RF spectrum of products, including Bluetooth, Bluetooth LE, NFC, Wi-Fi and more. Software stacks for all of these enable connectivity, but code that meshes (no pun) Wi-Fi with GPS data provides both outdoor and indoor position information. Here, Qualcomm’s software melds myriad infrastructure technologies to provide indoor positioning. A partnership with Cisco will bring the technology to consumer locations like shopping malls to coexist with Cisco’s Mobility Services Engine for location-based Apps.

Smart Start at Home
Finally, the smart home is another area ripe for innovation. Connected devices in the home range from the existing set-top box for entertainment, to that connected coffee pot, smart meter, Wi-Fi-enabled Nest thermostat and smoke/CO detector, home health and more. These disparate ecosystems, says Duggal, are similar only in their “heterogeneousness” in the home. That is: they were never designed to be interconnected. Qualcomm is taking their relationships with every smart meter manufacturer, their home gateway/backhaul designs, and their smartphone expertise, and rolling it all into the new AllJoyn software effort.

The open source AllJoyn initiative, spearheaded by Qualcomm, seeks to connect heterogeneous M2M nodes. Think: STB talks to thermostat, or refrigerator talks to garage door opener. Courtesy: Qualcomm and AllJoyn.org .

AllJoyn is an open source project that seeks to set a “common language for the Internet of Everything”. According to AllJoyn.org, the “dynamic proximal network” is created using a universal software framework that’s extremely lightweight. Qualcomm’s Duggal described the ability for a device to enumerate that it has a sensor, audio, display, or other I/O. Most importantly, AllJoyn is “bearer agnostic” and works across all leading OSes and connectivity mechanisms.

AllJoyn connectivity diagram. Courtesy: www.alljoyn.org .
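To make the “enumerate your capabilities, discover your neighbors” idea concrete, here’s a toy sketch of a proximal network. To be clear: this is illustrative code for the concept only, not the actual AllJoyn API, and every name in it is mine:

```python
# Illustrative only: a toy registry capturing the idea Duggal described
# (devices announce capabilities; peers discover by capability).
# This is NOT the AllJoyn API -- just the concept in miniature.
class ProximalNetwork:
    def __init__(self):
        self.devices = {}   # device name -> set of announced capabilities

    def announce(self, name, capabilities):
        """A device joins and enumerates what I/O it offers."""
        self.devices[name] = set(capabilities)

    def discover(self, capability):
        """Find every device on the network offering a capability."""
        return sorted(n for n, caps in self.devices.items()
                      if capability in caps)

net = ProximalNetwork()
net.announce("thermostat", {"sensor", "display"})
net.announce("set-top-box", {"audio", "display"})
net.announce("smoke-detector", {"sensor"})

print(net.discover("sensor"))    # ['smoke-detector', 'thermostat']
print(net.discover("display"))   # ['set-top-box', 'thermostat']
```

The real framework layers this on top of bus attachments, session management and transport abstraction, but the STB-talks-to-thermostat payoff is the same.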

If Qualcomm is to realize their vision of selling more modems and Snapdragon-like SoCs, making them play well together and exchange information is critical. AllJoyn is pretty new; a new Standard Client (3.4.0) was released on 9 October. It’s unclear to me right now how AllJoyn compares with Wind River’s MQTT-based M2M Intelligent Device Platform or Digi’s iDigi Cloud or Eurotech’s EveryWhere Device Framework.

Qualcomm’s on a Roll
With their leadership in RF modems and smartphone processors, Qualcomm is laser focused on the next big opportunity: the IoT/E. Making all of those M2M nodes actually do something useful will require software throughout the connected network. With so many software initiatives underway, Qualcomm is betting on their next big thing: the Internet of Everything. Software will be the company’s next major “killer app”.

Intel’s M2M Vision: Bringing IT Knowledge into the Embedded Space for the IoT

With PCs waning, Intel’s got another bullet in its gun pointed at the Internet of Things: huge knowledge of moving, storing and managing enterprise data.

The market has focused on two of Intel’s obvious Achilles’ Heels: the lack of a low-power embedded mobile processor to compete with ARM, and the slow death of the PC as the centerpiece of our digital world. But as pundits (like me) grouse about Intel’s slow progress at righting a perceived listing product portfolio, it turns out the company has a cogent plan to capitalize greatly on bringing embedded to the Internet of Things (IoT). Intel will leverage its heavy resources in enterprise data.

Oh, they’ll still need the recently unveiled low-power Atom successors called Bay Trail and Quark. And there’s no time to waste in bringing the new two-in-one laptop/notebook/tablet concept to market this Christmas. But Intel has quietly been nudging other pieces around on the big board into what might be a winning strategy, including bringing the company’s deep IT enterprise experience to bear on a world that will rely on embedded to make the connections. That new world relies on infrastructure to ease the task of moving, managing, abstracting, securing and monetizing all that connected machine-to-machine (M2M) data.

To learn firsthand about Intel’s plans for intelligent systems, I turned to Ryan Brown, director and chief of staff for Intel’s Intelligent Systems Group (ISG). At times Ryan is frustratingly vague because Intel has yet to announce concrete plans for M2M.

However, Intel’s still-nascent Intelligent Systems Framework for M2M is not only a set of connectivity recommendations; it will likely be catalyzed first by Wind River’s MQTT-based Intelligent Device Platform (IDP), announced at IDF2012 and further codified at IDF2013 in San Francisco. As well, McAfee’s Embedded Control and Global Threat Intelligence products could secure the end nodes, or clusters of low-intelligence nodes controlled by local concentrators. (Wind River and McAfee are both Intel companies.)
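To make the MQTT angle concrete, here’s a minimal sketch of how an MQTT-based device platform like IDP might structure telemetry: hierarchical topics plus small JSON payloads. The topic scheme and field names below are assumptions for illustration, not IDP’s actual wire format:

```python
import json

# Illustrative sketch of MQTT-style telemetry: each reading gets a
# hierarchical topic (so subscribers can filter by site, device or metric)
# and a compact JSON payload. No broker is involved in this sketch.
def make_telemetry(site, device_id, metric, value):
    topic = f"site/{site}/device/{device_id}/{metric}"  # MQTT topic hierarchy
    payload = json.dumps({"metric": metric, "value": value})
    return topic, payload

topic, payload = make_telemetry("plant7", "pump-3", "temperature", 61.5)
print(topic)  # site/plant7/device/pump-3/temperature
```

A real deployment would hand the topic and payload to an MQTT client publishing to a broker; the value of the scheme is that a local concentrator can subscribe to, say, `site/plant7/#` and aggregate everything beneath it.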

Finally, the new enterprise vision for Intel’s vPro adds the vector “productivity” to the IT value proposition—something we believe clearly emphasizes Intel’s focus on data that enters the enterprise from the Internet of Things. Although vPro currently relies on Ivy Bridge and Haswell processors running Windows, we expect Intel will roll vPro over to the Bay Trail Atom family and eventually onto Linux and Android operating systems (Figure 1). Supporting these embedded devices and platforms is the only way Intel can touch all the money-making parts of the M2M-connected Internet of Things.

(Edited excerpts follow.)

Figure 1: Intel’s commitment to all things connected spans Windows, Google’s Chrome, Android, MacOS and more, as shown during an IDF2013 keynote. (Photo by Chris A. Ciufo.)

Ryan Brown, Chief of Staff for Intelligent Systems, Intel. (Courtesy: Intel and YouTube.)

Q: We’ve been calling the transition from networked PCs over the Internet names like cloud connectivity, cloud computing, machine-to-machine, the Internet of Things, the Internet of Everything, and so on. Clearly everything is or will be connected to everything, and embedded devices will become increasingly important. What’s the official—if not evolving—view of what Intel calls “Intelligent Systems”?

A: There have been these compute model transformations going on over time. Whether it’s PCs, the Internet, mobile computing or all things that go around mobile computing, the next real transition point we think—whatever people call it (and you mentioned many of the names)—is that all of these devices are becoming connected. Connected at the edge [of the cloud], connected to each other and up through some system to the cloud. We are currently using that [concept] as the umbrella under which Intel obviously has verticals. M2M is one of those verticals that’s very interesting to us from a technology and customer-need perspective. We can quote all kinds of numbers to justify how big the market could be, but people are actually asking different questions such as: “Do I really want to connect this device?”

That’s a question that wouldn’t have even been on the radar before [the IoT]. So in that context, there are two pieces. One, there’s this opportunity for all of these things to get deployed, such as a new factory where the simple question is how to integrate the connectivity, security and the manageability pieces into the device. And a related question is what [technology] is needed to be able to move data into and out of the device into the bigger system?

The second piece is the existing infrastructure that many IoT advocates don’t want to talk about. That is: how do you actually get these things that are existing in the marketplace—like factories, transportation, infrastructure—connected together so they can actually communicate over LAN, 3G, Wi-Fi or other connection?

From Intel’s perspective, for both of these fundamental questions, we see a lot of opportunity for the industry to move forward rapidly by recognizing that all of this infrastructure will not be refreshed overnight. But users want to get some of the data off of the machines and use it. Ultimately the data will drive the transformation; it won’t be driven by merely bolting a new box on the side of a machine. How will the data allow me to do something differently with my business? What’s that data allowing me to do? How can I monetize it? Can I create a new service from it? How can I make different decisions?

Q: Are you saying that M2M is just one step in a fundamental shift going on in business?

A: From a big picture perspective…M2M is one of the internal-to-the-transformation vehicles that’s going to actually kick off the transformation I just mentioned, and Intel’s been paying attention to it for a long time. We’ve been placing devices with some companies to help them learn fast and for all of us to learn how to tweak [the data connectivity problem]. But let me be clear: the next transformation is going to take place because of the data. It will drive decisions on new equipment, new devices and new infrastructure but it will also drive change within existing structures. There’s a lot of value in connecting existing devices and this is what gives credence to the M2M mission of connecting devices together.

Q: Intel’s vision sounds in phase with that of your partner Microsoft, whose CEO Steve Ballmer revealed that all of Microsoft is writing code for the cloud. The company’s embedded roadmap shows various flavors of Windows Embedded and future Windows Embedded 8 products geared around data-centric verticals like point-of-sale (POS), handheld devices, the connected car and other small-footprint (code-wise) devices such as smart meters or home automation gateways. (Figure 2 shows Microsoft’s mission and vision for intelligent devices.)

Figure 2: Microsoft’s vision and mission for embedded focuses on the data. All Windows versions targeting embedded and various target verticals like POS target “touching” the data. Like Intel, Microsoft will leverage its experience in the enterprise markets.

What is Intel doing, product-wise?

A: We drive all of our silicon products into embedded markets. For example, the latest Haswell, and before that the Ivy Bridge and Sandy Bridge processors, are applicable to embedded. Atom, in its Bay Trail and Clover Trail versions, also applies. We’re working closely with Wind River and McAfee to deliver a more complete solution to the marketplace. And although Intel’s bread and butter is silicon—we do that really, really well—we’re working with our customers to understand what they need in this space.

I think the biggest challenge is actually looking at the industry problems: how can Intel go solve them? The fragmentation and diverse uses in these [M2M] markets, meshed with new uses and legacy environments, are a huge challenge. Add in global and local market needs and standardization…these are all things that Intel has historically helped address to drive markets. We’ve applied horizontal solutions to try to drive out some of the vertical market complexities.

Q: Can you be more specific?

A: To start, we are using the building blocks that we have on the silicon technology side and looking to apply them to the problems at hand. For example, I could look at a vending machine and make a case to install a higher-level Ivy Bridge or Haswell type device that can actually gather data from all the machine’s sensors and do some interesting filtering/balancing analytics on that data to help the machine’s owner make better decisions. Now, aggregate that over multiple points and the owner has a better picture of usage patterns across their vending machines, which becomes really compelling.
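The aggregation Brown describes can be sketched in a few lines. The data and field names below are invented; the point is simply rolling per-machine readings up into fleet-wide usage patterns:

```python
from collections import defaultdict

# Invented per-machine readings standing in for sensor data gathered
# from each vending machine.
readings = [
    {"machine": "VM-1", "item": "soda",  "sold": 12},
    {"machine": "VM-2", "item": "soda",  "sold": 30},
    {"machine": "VM-1", "item": "chips", "sold": 4},
]

def usage_by_item(readings):
    # Roll individual machine readings up into fleet-wide totals,
    # the kind of picture an owner would use for restocking decisions.
    totals = defaultdict(int)
    for r in readings:
        totals[r["item"]] += r["sold"]
    return dict(totals)

print(usage_by_item(readings))  # {'soda': 42, 'chips': 4}
```

In practice this rollup would run on the local concentrator or in the cloud, with far richer dimensions (time of day, location, machine model), but the shape of the computation is the same.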

It’s not as much about “what’s Intel driving specifically” beyond Haswell, because some of the chip’s security and manageability features come into play [in this scenario]. The real question for me is how can we use Intel compute technology to help solve these new, emerging problems.

Q: Again, can you be more specific about what Intel is doing to catalyze the vision?

A: We are definitely working on some things; some of them I can’t necessarily comment on. What we’re trying to do is bring together the technologies at the right level so we can deploy and help customers solve the problem.

Let me define more of the problem so you can get a better idea of the complexity customers face who have not concerned themselves with these challenges before.

For instance: how do you build an embedded M2M device (beyond the basic CPU, chipset, comms, software, protocol stack and antenna design) and then actually deploy it into an operational environment that is not a traditional IT environment? We see there’s value in bringing together some of these pieces that customers are going to need. That’s about all I can say at this time.

Like I said: the value’s in the data. So the customer’s real question is: how can I get access to the data the fastest? How can I get my pieces deployed, get the fastest time-to-money and start making better decisions? We are using the great technology building blocks Intel already has, like Haswell, which brings the power envelope way down. But it’s still a bit “heavy” for non-wired applications.

But as you go down into our Atom space you get into the Bay Trail solutions. The next-generation versions will bring some interesting power/performance capabilities [Figure 3]. We’re working very closely with Wind River Systems to bring in some of their software pieces and make sure they run really well and are coordinated with Intel products; these [capabilities] exist today. On top of that, we’re pulling in the security features from McAfee like their Embedded Control. The key question there is how do we lock down the device at the base layer so that the end product only runs what the owner wants it to run? Intel is making sure that all this works together when the device is deployed and only runs the pieces and parts that it’s supposed to run. Further up the stack, McAfee’s Secure Defender becomes applicable.
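The “only run what the owner wants it to run” idea is essentially application whitelisting, the approach Embedded Control takes. Here’s a toy sketch of the concept (not McAfee’s actual mechanism): compare a binary’s hash against an owner-approved allowlist before letting it execute:

```python
import hashlib

# Toy model of application whitelisting: the owner approves binaries by
# recording their SHA-256 hashes; anything not on the list is refused.
# The "binaries" here are just byte strings for illustration.
ALLOWLIST = {hashlib.sha256(b"approved-firmware-v1").hexdigest()}

def may_run(binary: bytes) -> bool:
    # A locked-down device checks the hash before execution; a single
    # flipped byte in a tampered image produces a different hash.
    return hashlib.sha256(binary).hexdigest() in ALLOWLIST

assert may_run(b"approved-firmware-v1")
assert not may_run(b"tampered-firmware")
```

A real product enforces this in the kernel or below so the check itself can’t be bypassed, but the allowlist-by-hash principle is the same.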

Let me say it again: we want to make it easy for the customer to deploy these M2M solutions, and it’s really, really important to us that this all works well together.

Figure 3: Intel revealed only a peek of what’s beyond the Silvermont core/Bay Trail Atom at IDF2013: a 14nm version called Airmont. (Photo by Chris A. Ciufo.)


Q: What about Intel’s Intelligent Systems Framework and vPro for connectivity and manageability?

A: There’s more there, but nothing that I can talk about. We’re working in that space and we’re setting up some criteria for how systems should be thought of as you get into this space. We’ve been working closely with Wind River on their IDP and their M2M protocols. Intel has not been pointing people specifically to IDP because we’ll be supporting a broad range of products and vendors [besides just Wind River].

As for vPro on Atom, we continue to look at how we can scale some of that functionality down to Atom. We see that manageability is a feature set also desired at the hardware level, but there’s nothing that I can comment on right now.

Q: What software tools are available to help in this market transition? I’m thinking of Intel’s excellent but not-well-known tools to help designers wring out power savings in Windows-based platforms. Anything like this in the IoT/M2M area?

A: The one we can talk about, launched earlier this year at Embedded World in February, is the “Intel System Studio” integrated software suite, but I’m not sure this is the kind of tool you’re referring to in the purely embedded space.

Q: Figure 4 shows a high level summary of what you’ve been describing. Any final comments?

A: We’ve been talking about how to make devices more intelligent and how to connect them with M2M control either directly or via a bolt-on device. The third piece—what are you going to start doing with all this data—is all about analytics and turning all that information into something “actionable.” Analytics touches all of these areas in the smart system of systems or smart intelligent systems. IoT and M2M span all three of these nodes.

Figure 4: Intel’s vision for M2M is more than just connectivity, it focuses on the data and it includes manageability and security, plus other attributes already found in the IT/enterprise space where Intel has substantial technology products like vPro, virtualization and more. Expect to see these rolled out with actual product names in the near future. (Courtesy: Intel, Smart Technology World, IDC.)