Intel Changes Course–And What a Change!

By Chris A. Ciufo, Editor, Embedded Intel Solutions

5 bullets explain Intel’s recent drastic course correction.

Intel CEO Brian Krzanich (Photo by author, IDF 2015.)

I recently opined on the amazing technology gifts Intel has given the embedded industry as the company approaches its 50th anniversary. Yet a few weeks later, the company reported disappointing financials and announced layoffs, restructurings, executive changes, and new strategies. Here are five key points from the recent news-storm of (mostly) negative coverage.

1. Layoffs.

Within days of the poor financial news, Intel CEO Brian Krzanich ("BK") announced that 12,000 loyal employees would have to go. As the event unfolded over a few days, the pain was felt throughout Intel: from the Oregon facility where its IoT Intelligent Gateway strategy resides, to its design facilities in Israel and Ireland, to older fabs in places like New Mexico. Friends of mine at Intel have either been let go or are afraid for their jobs. This is the part about tech—and it's not limited to Intel, mind you—that I hate the most. Sometimes it feels like a sweatshop. (Check out the recent story concerning BiTMICRO Networks, which really did treat its workers poorly.)

2. Atom family: on its way out. 

This story broke late on the Friday night after the financial news—it was almost as if the company hadn't planned on talking about it so quickly. But the bottom line is that the Atom never achieved all the goals Intel set for it: lower price, lower power, and a spot in handhelds. Of course, much has been written about Intel's failure to wrest more than a token slice from ARM's hegemony in mobile. (BTW: that term "hegemony" used to be applied to Intel's dominance in PCs. Sigh.) Details are still scant, but the current Atom Bay Trail architecture works very nicely, and I love my Atom-based Windows 8.1 Asus 2-in-1. But the next Atom iteration (Apollo Lake) looks like the end of the line. Versions of Atom may live on under other names like Celeron and Pentium (though some of these may also be Haswell or Skylake versions).

3. New pillars announced.

Intel used to use the term "pillars" for its technology areas, and BK has gone to great lengths to list the new ones: Data Center (aka Xeon); Memory (aka flash SSDs and Optane, the 3D XPoint Intel/Micron joint venture); FPGAs (aka Altera, eventually applied to Xeon co-accelerators); IoT (aka what Intel used to call embedded); and 5G (a modem technology the company doesn't really have yet). Mash-ups of these pillars include some of the use cases Intel is showing off today, such as wearables, medical, drones (apparently a personal favorite of BK), RealSense cameras, and smart automobiles including self-driving cars. (Disclosure: I contracted to Intel in 2013 pertaining to the automotive market.)

Intel's new pillars, according to CEO Brian Krzanich. 5G modems are included in "Connectivity." Not shown is "Moore's Law," which Intel must continue to push to be competitive.

4. Tick-tock goodbye.

For many years, Intel has set the benchmark for process technology and made damn sure Moore's Law was followed. The company's cadence of new architecture ("tock") followed by process shrink ("tick") predictably streamed products that found their way into PCs, laptops, and the data center (now "cloud" and soon "fog"). But as Intel approached 22nm, it got harder and harder to keep up the pace as CMOS channel dimensions approached angstroms (interatomic distances). The company has now officially retired tick-tock in favor of a three-step cadence of Process, Architecture, and Optimization (process tuning). This is in fact where the company is today, as the Core series evolved from 4th-gen (Haswell) to 5th-gen (Broadwell, a sort-of interim step) to the recent 6th-gen (Skylake). Skylake is officially a "tock," but if you work backwards, it's kind of a fine-tuned process improvement with new features such as really good graphics, although AnandTech and others lauded Broadwell's graphics. The next product, Kaby Lake (just "leaked" last week, go figure), looks to be another process tweak. Now-public specs point to even better graphics, if the data can be believed.

Intel is arguably the industry's largest software developer, and second only to Google when it comes to Android. (Photo by author, IDF 2015.)

5. Embedded, MCUs, and Value-Add.

This last bullet is my prediction of how Intel is going to climb back out of the rut. Over the years the company mimicked AMD and focused almost singularly on selling x86 CPUs and variants (though it worked tirelessly on software and standards like PCIe, WiDi, Android, USB Type-C, and much more). It jettisoned value-add MCUs like the then-popular 16-bit 80196 with on-chip A/D and the EPROM-based 8751, conceding those products to companies like Renesas (Hitachi), Microchip (PIC series), and Freescale (ARM- and Power-based MCUs, originally for automotive). Yet Intel can combine scads of its technology—including modems, WiFi (think: Centrino), PCIe, and USB—into intelligent peripherals for IoT end nodes. Moreover, the company's software arsenal even beats IBM's (I'll wager), and Intel can apply the x86 code base and tool set to dozens of new products. Or it could just buy Microchip or Renesas or Cypress.

It pains me to see Intel lay off people, retrench, and appear to fumble around. I actually do think it is shot-gunning things just a bit right now, and officially giving up on developing low-power products for smartphones. Yet it'll need low power for IoT nodes, too, and I don't know that Quark and Curie are going to cut it. Still: I have faith. BK is hell-fire-and-brimstone motivated, and the company is anything but stupid. Time to pick a few paths and stay the course.

From ARM TechCon: Two Companies Proclaim IoT “Firsts” in mbed Zone

UPDATE: Blog updated 14 Dec 2015 to correct typos in ARM nomenclature. C2

In ARM's mbed Zone, Silicon Labs and Zebra Technologies show off two IoT "firsts."

ARM’s mbed Zone—a huge dedicated section on the ARM TechCon 2015 exhibit floor—is the place where the hottest things for ARM’s new mbed OS are shown. ARM’s mbed is designed to make it easy to securely mesh IoT devices and their data to the cloud. Introduced at TechCon 2014 a year ago, mbed was just a concept; now it’s steps closer to reality.

Watch This!

The wearables market is one of three focus areas for ARM's development efforts, along with Smart Cities and Smart Home. ARM's first wearable dev platform is a smart watch worn by ARM IoT Marketing VP Zach Shelby and shown in Figure 1. It's based on ARM's wearables reference platform featuring mbed OS integration; a key feature is its power-management APIs.

Figure 1: ARM's smart watch development proof-of-concept, worn by ARM IoT Marketing VP Zach Shelby at ARM TechCon 2015.

According to IoT MCU and sensor supplier Silicon Labs, which helped co-develop the APIs with ARM, they "provide a foundation for all peripheral interactions in mbed OS" and are designed with low power and long battery life in mind. No one wants to charge a smart watch during the day; that's a non-starter. The APIs ensure things like minimal polling, interrupt-driven I/O, and placing peripherals in deep sleep modes—basically wringing every bit of power efficiency out of systems designed for long battery life. mbed OS clearly continues ARM's focus on low power but emphasizes IoT ease of design.
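To get a feel for the interrupt-driven, sleep-friendly style these APIs encourage, here is a minimal sketch using the classic mbed C++ API (a generic illustration; the actual mbed OS power-management APIs Silicon Labs co-developed differ in detail):

    #include "mbed.h"

    DigitalOut led(LED1);
    Ticker sampleTick;
    volatile bool sampleDue = false;

    // ISR: just set a flag and return; no busy-wait polling loop
    void onTick() { sampleDue = true; }

    int main() {
        sampleTick.attach(&onTick, 1.0f);   // wake once per second
        while (true) {
            if (sampleDue) {
                sampleDue = false;
                led = !led;                 // stand-in for reading a sensor
            }
            sleep();                        // drop the MCU into low-power sleep until the next interrupt
        }
    }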

In the mbed Zone, Silicon Labs was showing off its version of ARM's smart watch, which it calls Thunderboard Wear. It's blown up to demo-board size and complete with Silicon Labs' custom-designed blood-pressure and ambient-light sensors (Figure 2). The board is based on the company's ARM Cortex-M3-based EFM32 Giant Gecko SoC. Silicon Labs' main ARM TechCon announcement—and the reason the company is in the mbed Zone—is that all Gecko MCUs now support mbed OS. We'll dig into what this means technically in a future post.

Figure 2: Silicon Labs' version of ARM's smart watch—blown up into demo board size and complete with Cortex-M3 Giant Gecko MCU and BP sensor. The rubber straps remind that this is still "wearable", though only sort-of.

“Hello Chris”

Further proof of the growing maturity of mbed OS and its ecosystem is the Zebra Technologies "wireless mbed to cloud" demo shown during Atmel's evening reception at ARM TechCon (Zebra is also in the mbed Zone). Starting with Atmel's ATSAMW25-PRO demo board plus display add-on (Figure 4) containing ARM Cortex-M3 and Cortex-M4 Atmel SoCs, Zebra demonstrated communicating directly from a console to the WiFi-equipped demo.

Figure 4: Zebra Technologies demonstrates easy wireless connectivity to IoT devices using Atmel's SAMW25 MCU board and OLED1 expansion board.

When "Hello Chris" was typed into Zebra's Zatar browser-based software console (Figure 5), the sentence appeared on the tiny display almost immediately. More than a parlor trick, the demo shows the promise of the IoT, ARM cores, and the interoperability of mbed OS connected all the way back to the cloud and the Zatar device portal.

Figure 5: Zebra's Zatar IoT cloud console dashboard.

Zebra's Zatar cloud service works with Renesas's Synergy IoT platform, Freescale's Kinetis MCUs, and of course Atmel's SoCs (will Atmel also create its own end-to-end ecosystem?). The Zebra "IoT Kit" demoed at TechCon is "the first mbed 3.0 Wi-Fi kit that offers developers a prototype to quickly test drive IoT," said Zebra Technologies. If you're familiar with ARM's mbed OS connectivity/protocol stack diagram, Zebra uses the CoAP protocol to connect devices to the cloud. The company was a CoAP co-developer.
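CoAP (RFC 7252) is a compact, UDP-based RESTful protocol, and its wire format is simple enough to sketch by hand. Here is a minimal C++ sketch that builds a non-confirmable CoAP PUT, roughly the kind of message that could carry a "Hello Chris" string to a device resource; the resource name is hypothetical, and real stacks such as mbed's or Zebra's add tokens, more options, and reliability handling:

    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    // Build a minimal CoAP (RFC 7252) non-confirmable PUT. "display" is a
    // hypothetical resource name used purely for illustration.
    std::vector<uint8_t> buildCoapPut(uint16_t messageId, const std::string& payload) {
        std::vector<uint8_t> msg;
        msg.push_back(0x50);  // Ver=1 (01), Type=NON (01), token length=0
        msg.push_back(0x03);  // Code 0.03 = PUT
        msg.push_back(static_cast<uint8_t>(messageId >> 8));   // Message ID, big-endian
        msg.push_back(static_cast<uint8_t>(messageId & 0xFF));
        const char* path = "display";
        msg.push_back(static_cast<uint8_t>(0xB0 | std::strlen(path))); // Option 11 (Uri-Path), short length
        msg.insert(msg.end(), path, path + std::strlen(path));
        msg.push_back(0xFF);  // payload marker
        msg.insert(msg.end(), payload.begin(), payload.end());
        return msg;           // ready to send over UDP, default port 5683
    }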

The significance of the demo is multifold: quick development time using established Atmel hardware; cloud connectivity using Wi-Fi; an open-standard IoT protocol; and compliance with ARM's latest mbed OS 3.0.

The fact that the Zatar console easily connects to multiple vendors' processors means thousands or tens of thousands of IoT nodes can be controlled, updated, and queried for data with minimal effort. In short: creating wireless IoT products and using them just got a whole lot easier.

Zebra will sell the Zebra ARM mbed IoT Kit for Zatar via distributors; more information is available at www.zatar.com/IoTKit.

AMD on a Design Win Roll: GE and Samsung, Recent Examples

AMD is announcing several design wins per week as second-gen APUs show promise.

Note: AMD is a sponsor of this blog.

I follow many companies on Twitter, but lately it's AMD that's tweeting the loudest with weekly design wins. The company's APUs (accelerated processing units) seem to be gaining traction in systems where PC functionality with game-like graphics is critical. Core to both—pun intended!—is the x86 ISA with its PC compatibility and rich software ecosystem.

Here’s a look at two of AMD’s recent design wins, one for an R-Series and the other for the all-in-one G-Series APU.

Samsung's "set-back box" adds high-res graphics and PC functions to their digital signage displays. (Courtesy: Samsung.)

Samsung Digital Signs on to AMD

In April, Samsung and AMD announced that AMD's second-gen embedded R-Series APU, previously codenamed "Bald Eagle," is powering Samsung's latest set-back box (SBB) digital media players. I had no idea what a set-back box was until I looked it up.

Turns out it's a slim, embedded "pizza box" computer, 310mm x 219mm x 32mm (12.2in x 8.6in x 1.3in), that's inserted into the back ("set-back") of a Samsung Large Format Display (LFD). These industrial-grade LFDs range in size from 32in to 82in and are used in digital signage applications.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

What makes them so compelling is also the reason Samsung chose AMD's R-Series APU. The SBB is a complete networked PC, eliminating the need for a separate box, and it's remotely controlled by Samsung's MagicInfo software, which allows up to 192 displays to be linked with the same or stitched-together content.

That is, one can build a video wall where the image is split across the displays—relying on AMD's Eyefinity graphics feature—or content can be streamed across networked displays, depending upon the retailer's desired effect. Key to Samsung's differentiation are remote management, RS-232 control, and network-based self-diagnostics with active alert notification of problems.

Samsung is using the RX-425BB APU with integrated AMD Radeon R6 GPU. Per the datasheet, this version has a 35W TDP with four x86 cores and six GPU cores at 654 MHz; it's based on AMD's latest "Steamroller" 64-bit CPU architecture and can be paired with AMD's Embedded Radeon E8860 discrete GPU. Each R-Series APU can drive four 3D, 4K, or HD displays (up to 4096 x 2160 pixels) while supporting DirectX 11.1, OpenGL 4.2, and AMD's Mantle gaming SDK.

As neat as all of this is—it's a super-high-end embedded LAN-party "gaming" PC system, after all—it's the support for the latest HSA Foundation specs that makes the R-Series (and companion G-Series SOC) equally compelling for deeply embedded applications. HSA allows mixed CPU and GPU computation, which is especially useful in industrial control with its combination of general-purpose, machine-control, and display requirements.
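To make that division of labor concrete, here is a tiny OpenCL-style sketch (OpenCL is one GPGPU route AMD's APUs support; the kernel and limits are invented for illustration). The GPU checks thousands of sensor samples in parallel; the host x86 cores read the flags back and make the actual control decision:

    // Hypothetical OpenCL C kernel, carried as a C++ raw string for the host to build.
    const char* kCheckLimits = R"(
        __kernel void check_limits(__global const float* samples,
                                   __global int* flags,
                                   const float lo, const float hi) {
            size_t i = get_global_id(0);
            // Flag any sample outside the allowed band; the CPU decides what to do next
            flags[i] = (samples[i] < lo || samples[i] > hi) ? 1 : 0;
        }
    )";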

GE Chooses AMD SOC for SFF

The second design win for AMD came back in February, and it wasn't broadcast widely: I stumbled across it while working on a sponsored piece for GE Intelligent Platforms. (Disclosure: GE-IP is a sponsor of this blog.)

The AMD G-Series is now a monolithic, single-chip SOC that combines x86 CPU and Radeon graphics. (Courtesy: GE; YouTube.)

Used in a rugged, COM Express industrial controller, the AMD G-Series SOC met GE’s needs for low power and all-in-one processing, said Tommy Swigart, Global Product Manager at GE Intelligent Platforms. The “Jaguar” core in the SOC can sip as little as 5W TDP, yet still offers 3x PCIe, 2x GigE, 4x serial, plus HD audio and video, 10 USB (including 2x USB 3.0) and 2 SATA interfaces. What a Swiss Army knife of capability it is.

GE chose AMD's G-Series APU for a rugged COM Express module for use in GE's Industrial Internet. (Courtesy: GE Intelligent Platforms, YouTube.)

GE’s going all-in with the GE Industrial Internet, the company’s version of the IoT. Since the company is so diversified, GE can wring cost efficiencies for its customers by predicting aircraft maintenance, reducing energy in office HVAC installations, and interconnecting telemetry from locomotives to reduce track traffic and downtime. AMD’s G-Series APU brings computation, graphics, and bundles of I/O in a single-chip SOC—ideal for use in GE’s rugged SFF.

GE's Industrial Internet runs on AMD's G-Series APU. (Courtesy: GE; YouTube.)

New HSA Spec Legitimizes AMD’s CPU+GPU Approach

Nearly three years after the formation of the Heterogeneous System Architecture (HSA) Foundation, the consortium releases version 1.0 of the Architecture Spec, Programmer's Reference Manual, Runtime Specification, and a Conformance Plan.

Note: This blog is sponsored by AMD.

UPDATE 3/17/15: Added Imagination Technologies as one of the HSA founders. C2

No one doubts the wisdom of AMD's Accelerated Processing Unit (APU) approach, which combines an x86 CPU with a Radeon GPU. After all, one SoC does it all—makes CPU decisions and drives multiple screens, right?

True. Both AMD’s G-Series and the AMD R-Series do all that, and more. But that misses the point.

In laptops this is how one uses the APU, but in embedded applications—like the IoT of the future, which increasingly relies on high-performance embedded computing (HPEC) at the network's edge—the GPU functions as a coprocessor. CPU + GPGPU (general-purpose graphics processing unit) is a powerful combination of decision-making plus parallel algorithm processing that does local, at-the-node computation, reducing the burden on the cloud. This, according to AMD, is how the IoT will reach tens of billions of units so quickly.

Trouble is, HPEC programming is difficult. Coding the GPU requires a "ninja programmer," quipped AMD's VP of Embedded Solutions Scott Aylor during his keynote at this year's Embedded World Conference in Germany. (Video of the keynote is here.) Worse still, capitalizing on the CPU + GPGPU combination requires passing data between two architectures that don't share a unified memory architecture. (It's not that AMD's APU couldn't be designed that way; rather, the processors require different memory architectures for maximum performance. In short: they're different for a reason.)
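For a sense of what that data shuffling looks like in today's host code, here is a condensed OpenCL sketch (buffer names are invented; error handling omitted). Every explicit copy below is exactly the traffic that a shared, unified memory model would eliminate:

    #include <CL/cl.h>

    // Assume context, queue, kernel, and n samples are already set up.
    void runOnGpu(cl_context ctx, cl_command_queue q, cl_kernel k,
                  const float* in, float* out, size_t n) {
        // 1. Copy input from CPU memory into a GPU-visible buffer
        cl_mem dIn  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                     n * sizeof(float), (void*)in, nullptr);
        cl_mem dOut = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                     n * sizeof(float), nullptr, nullptr);
        clSetKernelArg(k, 0, sizeof(dIn),  &dIn);
        clSetKernelArg(k, 1, sizeof(dOut), &dOut);
        // 2. Launch the kernel across n work-items
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
        // 3. Copy results back to CPU memory (blocking read)
        clEnqueueReadBuffer(q, dOut, CL_TRUE, 0, n * sizeof(float), out,
                            0, nullptr, nullptr);
        clReleaseMemObject(dIn);
        clReleaseMemObject(dOut);
    }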

AMD's Scott Aylor giving keynote speech at Embedded World, 2015. His message: some IoT nodes demand high-performance heterogeneous computing at the edge.

AMD realized this limitation years ago, and in 2012 it catalyzed the HSA Foundation with several companies including ARM, Texas Instruments, Imagination Technologies, MediaTek, Qualcomm, Samsung, and others. The goal was to create a set of specifications that not only define heterogeneous hardware architectures but also create an HPEC programming paradigm for CPUs, GPUs, DSPs, and other compute elements. Collectively, the aim was to make heterogeneous SoCs easy to design, program, and power-optimize (Figure).

The HSA Foundation's goals are realized by making the coder's job easier using tools—such as an HSA-enabled LLVM open-source compiler—that integrate multiple cores' ISAs. (Courtesy: HSA Foundation; all rights reserved.) Heterogeneous System Architecture (HSA) specifications version 1.0, HSA Foundation, March 2015.

After three years of work, the HSA Foundation has just released its specifications at version 1.0:

  • HSA System Architecture Spec: defines hardware and OS requirements, the memory model (important!), the signaling paradigm, and fault handling.
  • Programmer's Reference Guide: essentially a virtual ISA for parallel computing; defines an output format for HSA language compilers.
  • HSA Runtime Spec: an application library for running HSA applications; defines initialization (INIT), user queues, and memory management (see the sketch below).
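For a flavor of the runtime, here is a minimal C++ sketch against the published HSA 1.0 runtime API; it performs the INIT step and then enumerates the CPU and GPU "agents" the runtime finds (the header location varies by vendor implementation):

    #include <hsa.h>    // HSA 1.0 runtime header; install path varies by vendor
    #include <cstdio>

    // Callback invoked once per compute agent (CPU, GPU, DSP, ...)
    static hsa_status_t printAgent(hsa_agent_t agent, void*) {
        char name[64] = {0};
        hsa_agent_get_info(agent, HSA_AGENT_INFO_NAME, name);
        hsa_device_type_t type;
        hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
        std::printf("agent: %s (%s)\n", name,
                    type == HSA_DEVICE_TYPE_GPU ? "GPU" : "CPU");
        return HSA_STATUS_SUCCESS;
    }

    int main() {
        if (hsa_init() != HSA_STATUS_SUCCESS) return 1;  // the spec's INIT step
        hsa_iterate_agents(printAgent, nullptr);         // walk every agent
        hsa_shut_down();
        return 0;
    }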

With HSA, the magic really does happen under the hood, where the devil's in the details. For example, the HSA-enabled LLVM open-source compiler creates a vendor-agnostic HSA Intermediate Language (HSAIL) that's essentially a low-level virtual machine. From there, "finalizers" compile to vendor-specific ISAs such as AMD's or Qualcomm's Snapdragon. It's at this point that low-level libraries can be added for specific silicon implementations (such as VSIPL for vector math). This programming model uses vendor-specific tools but allows novice programmers to start in C++ and end up with optimized, performance-oriented, power-efficient code for the heterogeneous combination of CPU plus GPU or DSP.

There are currently 43 companies and 16 universities involved with HSA, plus three working groups (which are already working on version 1.1). Look at the participants, think of their market positions, and you'll see they have a vested interest in making this a success.

In AMD's case, as the only supplier of both x86- and ARM-based APUs with GPUs to the embedded market, the company sees even bigger successes as more embedded applications leverage heterogeneous parallel processing.

One example where HSA could be leveraged, said Phil Rogers, President of the HSA Foundation, is multi-party video chat. An HSA-compliant heterogeneous architecture would allow the processors to work in a single (virtual) memory pool and avoid the multiple data-set copies—and processor churn—prevalent in current programming models.

With key industry players supporting HSA including AMD, ARM, Imagination Technologies, Samsung, Qualcomm, MediaTek and others, a lot of x86, ARM, and MIPS-based SoCs are likely to be compliant with the specification. That should kick off a bunch of interesting software development leading to a new wave of high performance applications.

Coke Machine with a Wii Mote? K-Cup Coffee Machine Marries Oculus Rift?

Talk about a weird Coke-machine-meets-Minority Report mash-up: the day is here when vending machines will do virtual-reality tricks, access your Facebook page, and be as much fun as Grand Theft Auto. And they'll still dispense stuff.

By Jeff Moriarty (Custom Coke machine – operating screen) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

According to embedded tech experts ADLINK Technology and Intel, the intelligent vending machine is here. Multiple iPad-like screens will entertain you, suggest which product to buy, feed you social media, and take your money.

These vending machines will be IoT-enabled, connected to the cloud and multiple online databases, and equipped with multiple cameras. Onboard signal processing will respond to 3D gesture control or immerse you in a virtual reality scenario with the product you’re trying to buy. Facial recognition, voice recognition, and even following your eyes as you roam the screens will be the norm.
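Intel's Perceptual Computing SDK (shown below) is one route to this. As a generic illustration of how little code basic face detection takes, here is a short sketch using OpenCV's Haar-cascade detector instead (a stand-in for, not the vendors' actual SDK; file names are hypothetical):

    #include <opencv2/objdetect.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <vector>
    #include <cstdio>

    int main() {
        // Stock frontal-face model shipped with OpenCV; path is illustrative
        cv::CascadeClassifier faces("haarcascade_frontalface_default.xml");
        cv::Mat frame = cv::imread("shopper.jpg", cv::IMREAD_GRAYSCALE);
        std::vector<cv::Rect> hits;
        faces.detectMultiScale(frame, hits);   // find face bounding boxes
        std::printf("customers facing the machine: %zu\n", hits.size());
        return 0;
    }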

Facial recognition via Intel's Perceptual Computing SDK. (Courtesy: Intel Corporation.)

For you, the customer, the vending machine experience will be fun and entertaining—very much like a video game. For the retailer, the hope is to make more money off of you while using remote IoT monitoring, predictive diagnostics, and "big data" to lower costs.

The era of the intelligent vending machine is upon us. The machine's already connected to the Internet…and it's one of the many billions of smart IoT nodes coming to a store near you.

Read more about them here.

Virtual, Immersive, Interactive: Performance Graphics and Processing for IoT Displays

Current-gen machines like these will give way to smart, IoT-connected machines with 64-bit graphics and virtual reality-like customer interaction.

Not every IoT node contains a low-performance processor, a sensor, and a slow comms link. Sure, there may be tens of billions of those, but estimates from IHS, Gartner, and Cisco still imply the need for billions of smart IoT nodes with hefty processing requirements. These intelligent IoT platforms are best served by 64-bit processors like AMD's G- and R-Series Accelerated Processing Units (APUs). AMD's claim to fame is 64-bit cores combined with on-board Radeon graphics processing units (GPUs) and tons of I/O.

As an example, consider this year's smart vending machine. It may dispense espresso or electronic toys, or maybe show the customer wearing virtual custom-fit clothing. Suppose the machine showed you—at that very moment—using or drinking the product you were staring at just seconds before.

Far-fetched? Far from it. It's real.

These machines require a multi-media, sensor fusion experience. Multiple iPad-like touch screens may present high-def product options while cameras track customers’ eye movements, facial expressions, and body language in three-space.

This "visual compute" platform will tailor the displayed information to best interact with the customer in an immersive, gesture-driven experience. Fusing all these inputs, processing the data in real time, and driving multiple displays is best handled by 64-bit APUs with closely coupled CPU and GPU execution units, hardware acceleration, and support for standards like DirectX 11, HSA 1.0, OpenGL, and OpenCL.

For heavy lifting in visual compute-intensive IoT platforms, keep an eye on AMD’s graphics-ready APUs.

If you are attending Embedded World February 24-26, be sure to check out the keynote "Heterogeneous Computing for an Internet of Things World," by Scott Aylor, Corporate VP and General Manager, AMD Embedded Solutions, on Wednesday the 25th at 9:30.

This blog was sponsored by AMD.

What’s the Nucleus of Mentor’s Push into Industrial Automation?

Mentor's once nearly-orphaned Nucleus RTOS forms the foundation of a darned impressive software suite for controlling everything from meat-packing lines to nuclear power plants.

Everyone appreciates an underdog—the pale, wimpy kid with glasses and a brown polyester sweater who gets routinely beaten up by the popular boys—but who sticks it out day after day and eventually grows up to create a tech start-up everyone loves. (Part of this story is my personal history; I'll let you guess which part.)

So it is with Mentor's Nucleus RTOS, which the company announced forms the basis of its recent initiative into Industrial Automation (I.A.). Announced this week at the ARC Industry Forum in Orlando is Mentor's "Embedded Solution for Industrial Automation" (Figure 1). A cynic might look at this figure as a collection of existing Mentor products…slightly rearranged to make a compelling argument for a "solution" in the I.A. space. That skinny kid Nucleus is right there, listed on the diagram. Oh, how many times have I asked Mentor why they keep Nucleus around only to get beaten up by the big RTOS kids!

Figure 1: Mentor's Industrial Automation Solution for embedded, IoT-enabled systems relies on the Nucleus RTOS, including a secure hypervisor and enhanced security infrastructure.

After all, you'll recognize Mentor's Embedded Linux, the Nucleus RTOS I just mentioned, and the company's Sourcery debug/analyzer/IDE product suite. All of these have been around for a while, although Nucleus is the grown-up kid in this bunch. (Pop quiz: True or false—did all three of these products come from Mentor acquisitions? Bonus question: from which company or companies?)

Into this mix, Mentor is adding new security tools from our friends at Icon Labs, plus hooks to Qt, a hot GUI/HMI framework for automation. (Full disclosure: Icon Labs founder Alan Grau is one of our security bloggers; however, we were taken by surprise by this recent Mentor announcement!)

Industry 4.0: I.A. meets IoT

According to Mentor's Director of Product Management for Runtime Solutions, Warren Kurisu (whose last name is pronounced just like my first name in Japanese: Ku-ri-su), I.A. is gaining traction, big time. There's a term for it: "Industry 4.0." The large industrial automation vendors—GE, Siemens, Schneider Electric, and others—have long been collecting factory data and feeding it into the enterprise, seeking to reduce costs, increase efficiency, and tie systems into the supply chain. Today we call this concept the Internet of Things (IoT), and Industry 4.0 is basically the promise of interoperability between today's bespoke (and proprietary) I.A. systems and smart, connected IoT devices, plus a layer of cyber security thrown in.

Mentor's Kurisu points out that what's changed is not only the kinds of devices that will connect into I.A. systems, but also how they'll connect—in more ways than via serial SCADA or Fieldbus links. Industrial automation will soon include all the IoT pipes we're reading about: Wi-Fi, Bluetooth LE, various mesh topologies, Ethernet, cellular—basically whatever works and is secure.

The Skinny Kid Prevails

Herein lies the secret of Mentor's Industrial Automation Solution: it just so happens the company has most of what you'd need to connect legacy I.A. systems to the IoT, plus add new kinds of smart embedded sensors into the mix. What's driving the whole market is cost. According to a recent ARC survey, reduced downtime, improved process performance, and reduced machine lifecycle costs—all of these, and more—are leading I.A. customers and vendors to upgrade their factories and systems.

Additionally, says Mentor’s Kurisu, having the ability to consolidate multiple pieces of equipment, reduce power, improve safety, and add more local, operator-friendly graphics are criteria for investing in new equipment, sensors, and systems.

Mentor brings something to the party in each of these areas:

- machine or system convergence, through improved system performance or reduced footprint

- capabilities and differentiation, allowing I.A. vendors to create systems different from "the other guys'"

- faster time-to-money, through increased productivity in system design and debug—anything that reduces the effort of I.A. vendors and their customers.

Figure 2: Industrial automation à la Mentor. The embedded pieces rely on Nucleus RTOS or variations thereof. New Qt software for automation GUIs plus security gateways from Icon Labs bring security and IoT into legacy I.A. installations.

Figure 2 sums up the Mentor value proposition, but notice how most of the non-enterprise blocks in the diagram are built upon the Nucleus RTOS.

Nucleus, for example, has achieved safety certification by TÜV SÜD, complete with artifacts (the certified version is called Nucleus SafetyCert). Mentor's Embedded Hypervisor—a foundational component of some versions of Nucleus—can be used to create a securely partitioned environment for multicore or multiprocessor designs (heterogeneous or homogeneous), running multiple operating systems that can't cross-contaminate in the event of a virus or other fault.

New to the Mentor offering is an industry-standard Qt GUI running on Linux, or Qt optimized for embedded instantiations running on—wait for it—Nucleus RTOS. Memory and other performance optimizations shrink the footprint and speed boot times, and there are now versions for popular IoT processors such as ARM's Cortex-M cores.
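For a sense of scale, a bare-bones Qt HMI really is just a few lines of C++ (a generic Qt Widgets sketch, not Mentor's embedded packaging of Qt; the readout text is invented):

    #include <QApplication>
    #include <QLabel>

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        // Stand-in for an operator-facing readout on a machine panel
        QLabel gauge("Line 3 throughput: 1,204 units/hr");
        gauge.show();
        return app.exec();   // hand control to Qt's event loop
    }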

Playground Victory: The Take-away

So if the next step in industrial automation is Industry 4.0—the rapid build-out of industrial systems that reduce cost and add IoT capabilities with secure interoperability—then Mentor has a pretty compelling offering. The consolidation and emphasis on low power I mentioned above can be had for free via capabilities already built into Nucleus.

For example, embedded systems based on Nucleus can intelligently turn off I/O and displays, and even rapidly drive multicore processors into their deepest sleep modes. One example Mentor's Kurisu walked me through showed an ARM-based big.LITTLE system that ramped performance when needed but otherwise kept power to a minimum. This is made possible, in part, by Mentor's power-aware drivers for an entire embedded I.A. system under the control of Nucleus.
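Mentor's power-aware APIs aren't public in this announcement, but the ramp-up, ramp-down idea can be sketched with Linux's standard cpufreq sysfs interface as an analogy (the governor names are stock Linux; this is not Nucleus code):

    #include <fstream>
    #include <string>

    // Ask the kernel's cpufreq subsystem to favor low power or full speed.
    // A power-aware RTOS exposes similar intent through its own framework.
    void setCpuGovernor(const std::string& governor) {
        std::ofstream f("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
        f << governor;   // e.g., "powersave" at idle, "performance" under load
    }

    int main() {
        setCpuGovernor("powersave");     // idle: keep the big cores quiet
        // ... burst of work arrives ...
        setCpuGovernor("performance");   // ramp up for the workload
        return 0;
    }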

And in the happy ending we all hope for, it looks like the maybe-forgotten Nucleus RTOS—so often ignored by editors like me writing glowingly about Wind River's VxWorks or Green Hills' INTEGRITY—well, maybe Nucleus has grown up. It's an RTOS ready to run the factory of the future. Perhaps your electricity is right now generated under the control of the nerdy little RTOS that made it big.

The Soft(ware) Core of Qualcomm’s Internet of Everything Vision

Qualcomm supplements silicon with multiple software initiatives.

Update 1: Added attribution to figures.
The numbers are huge: 50B connected devices; 7B smartphones to be sold by 2017; 1000x growth in data traffic within a few years. Underlying all of these devices in the Internet of Things…wait, the Internet of Everything…is Qualcomm. Shipping 700 million chipsets per year on top of a wildly successful IP-creation business in cellular modem algorithms, plus being arguably #1 in 3G/4G/LTE with Snapdragon SoCs in smartphones, the company is now setting its sights on M2M connectivity. Qualcomm has perhaps more IoT/IoE initiatives than any other vendor, and increasingly those initiatives rely on the software needed for the global M2M-driven IoT/IoE trend to take root.

Telit Wireless Devcon
Speaking at the Telit Wireless Devcon in San Jose on 15 October, Qualcomm VP Nakul Duggal of the Mobile Computing Division painted a picture of the many pieces of the company's strategy for the IoT/E. Besides the aforementioned arsenal of Snapdragon SoC and Gobi modem components, the company is bringing to bear Wi-Fi, Bluetooth, local radio (like NFC), GPS, communications stacks, and a vision for heterogeneous M2M device communication it calls "dynamic proximal networking." Qualcomm supplies myriad chipsets to Telit Wireless, and Telit rolls them into higher-order modules upon which Telit's customers add end-system value.

Over eight Telit Wireless modules are based upon Qualcomm modems, as presented at the Telit Wireless Devcon 2013.

But it all needs software in order to work. Here are a few of Qualcomm’s software initiatives.

Modem’s ARM and API Open to All
Many M2M nodes—think of a vending machine, or the much-maligned connected coffee maker—don't need a lot of intelligence to function. They collect data, perform limited functions, and send analytics and diagnostics to their remote M2M masters. Qualcomm's Duggal says the ARM processors inside Qualcomm modems are powerful enough to carry that computational load. There's no need for an additional CPU, so the company is making Java (including Java ME), Linux, and ThreadX available to run on its third-generation Gobi LTE modems.

Qualcomm is already on its 3rd generation of Gobi LTE modems.

Qualcomm has also opened up the modem APIs and made its IoT Connection Manager software available to make it easier to write closer-to-the-metal code for the modem. Duggal revealed that Qualcomm has partnered with Digi International in this effort as it applies to telematics market segments.

Leverage Smartphone Graphics
And some of those M2M devices on the IoE may have displays—simple UIs at first (like a vending machine), but increasingly complex as the device interacts with the consumer. A restaurant's digital menu sign, for example, need not run a full-blown PC and Windows Embedded operating system when a version of a Snapdragon SoC will do. After all, the 1080p HDMI graphics needs of an HTC One with an S600 far outweigh those of a digital sign. Qualcomm's graphics accelerators and signal-processing algorithms can easily apply to display-enabled M2M devices. This applies doubly as more intelligence is pushed to the M2M node, alleviating the need to send reams of data up to the cloud for processing.

Digital 6th Sense: Context
Another area Duggal described, the "Digital 6th Sense," might be thought of as contextual computing. Smartphones or wearable fitness devices like Nike's new FuelBand SE might react differently when they're outside, at work, or in the home. More than just counting steps and communicating with an app, if the device knows where it is—including precisely where it is inside a building—it can perform different functions. Qualcomm now offers the full Atheros RF spectrum of products, including Bluetooth, Bluetooth LE, NFC, Wi-Fi, and more. Software stacks for all of these enable connectivity, but code that meshes (no pun) Wi-Fi with GPS data provides both outdoor and indoor position information. Here, Qualcomm's software melds myriad infrastructure technologies to provide indoor positioning. A partnership with Cisco will bring the technology to consumer locations like shopping malls, coexisting with Cisco's Mobility Services Engine for location-based apps.

Smart Start at Home
Finally, the smart home is another area ripe for innovation. Connected devices in the home range from the existing set-top box for entertainment to the connected coffee pot, the smart meter, the Wi-Fi-enabled Nest thermostat and smoke/CO detector, home health devices, and more. These disparate ecosystems, says Duggal, are similar only in their "heterogeneousness" in the home; that is, they were never designed to be interconnected. Qualcomm is taking its relationships with every smart meter manufacturer, its home gateway/backhaul designs, and its smartphone expertise, and rolling them into the new AllJoyn software effort.

The open source AllJoyn initiative, spearheaded by Qualcomm, seeks to connect heterogeneous M2M nodes. Think: STB talks to thermostat, or refrigerator talks to garage door opener. (Courtesy: Qualcomm and AllJoyn.org.)

AllJoyn is an open-source project that seeks to set a "common language for the Internet of Everything." According to AllJoyn.org, the "dynamic proximal network" is created using a universal software framework that's extremely lightweight. Qualcomm's Duggal described the ability for a device to enumerate that it has a sensor, audio, a display, or other I/O. Most importantly, AllJoyn is "bearer agnostic," running across all leading OSes and connectivity mechanisms.
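Here is a heavily simplified sketch of that enumeration using the AllJoyn C++ API (exact signatures vary by release; the interface name and property are invented):

    #include <alljoyn/BusAttachment.h>
    #include <alljoyn/InterfaceDescription.h>

    int main() {
        // Attach this device to the local AllJoyn bus
        ajn::BusAttachment bus("ThermostatApp", true);
        bus.Start();
        bus.Connect("unix:abstract=alljoyn");   // connect spec; AllJoyn itself is bearer-agnostic

        // Advertise what this node has: here, a readable temperature value
        ajn::InterfaceDescription* intf = nullptr;
        bus.CreateInterface("org.example.Thermostat", intf);
        intf->AddProperty("Temperature", "d", PROP_ACCESS_READ);
        intf->Activate();
        // ... next: register a BusObject implementing the interface and advertise it ...
        return 0;
    }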

AllJoyn connectivity diagram. (Courtesy: www.alljoyn.org.)

If Qualcomm is to realize its vision of selling more modems and Snapdragon-like SoCs, making them play well together and exchange information is critical. AllJoyn is pretty new; a new Standard Client (3.4.0) was released on 9 October. It's unclear to me right now how AllJoyn compares with Wind River's MQTT-based M2M Intelligent Device Platform, Digi's iDigi Cloud, or Eurotech's Everyware framework.

Qualcomm’s on a Roll
With its leadership in RF modems and smartphone processors, Qualcomm is laser-focused on the next big opportunity: the IoT/E. Making all of those M2M nodes actually do something useful will require software throughout the connected network. With so many software initiatives underway, Qualcomm is betting on its next big thing: the Internet of Everything. Software will be the company's next major "killer app."