From ARM TechCon: Two Companies Proclaim IoT “Firsts” in mbed Zone

UPDATE: Blog updated 14 Dec 2015 to correct typos in ARM nomenclature. C2

At ARM’s mbed Zone, Silicon Labs and Zebra Technologies show off two IoT “Firsts”.

ARM’s mbed Zone—a huge dedicated section of the ARM TechCon 2015 exhibit floor—is the place where the hottest things for ARM’s new mbed OS are shown. ARM’s mbed is designed to make it easy to securely mesh IoT devices and their data with the cloud. When introduced at TechCon 2014 a year ago, mbed was just a concept; now it’s steps closer to reality.

Watch This!

The wearables market is one of three focus areas for ARM’s development efforts, along with Smart Cities and Smart Home. ARM’s first wearable dev platform is a smart watch worn by ARM IoT Marketing VP Zach Shelby and shown in Figure 1. It’s based on ARM’s wearables reference platform with mbed OS integration; a key feature is its power management APIs.

Figure 1: ARM’s smart watch development proof-of-concept, worn by ARM IoT Marketing VP Zach Shelby at ARM TechCon 2015.

According to IoT MCU and sensor supplier Silicon Labs, which co-developed the APIs with ARM, they “provide a foundation for all peripheral interactions in mbed OS” and are designed with low power and long battery life in mind. No one wants to charge a smart watch during the day; that’s a non-starter. The APIs minimize polling and interrupts, place peripherals in deep sleep modes, and basically wring every bit of power efficiency out of systems designed for long battery life. mbed OS clearly continues ARM’s focus on low power, but emphasizes IoT ease-of-design.
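
As a rough sketch of the pattern those APIs encourage (interrupt-driven peripherals and a core that deep-sleeps between events), consider the following. The class and function names are illustrative stand-ins, not actual mbed OS symbols:

```cpp
// Minimal sketch of the interrupt-driven, deep-sleep-friendly pattern that
// low-power peripheral APIs encourage. All names here are illustrative
// stand-ins, not real mbed OS 3.0 calls.
#include <cstdint>
#include <cstdio>
#include <functional>

struct AmbientLightSensor {
    std::function<void(uint32_t)> on_data_ready;  // fires only when new data exists
    void raise_interrupt(uint32_t lux) { if (on_data_ready) on_data_ready(lux); }
};

struct PowerManager {
    static void deep_sleep_until_interrupt() {    // stand-in for WFI/deep sleep
        std::puts("core in deep sleep; peripherals armed");
    }
};

int main() {
    AmbientLightSensor als;
    als.on_data_ready = [](uint32_t lux) {        // no polling loop anywhere
        std::printf("lux=%u: update watch face, then back to sleep\n", lux);
    };
    PowerManager::deep_sleep_until_interrupt();
    als.raise_interrupt(420);                     // simulated sensor interrupt
}
```

The point is that the core spends nearly all its time asleep and wakes only on peripheral events, which is how a smart watch avoids mid-day charging.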

In the mbed Zone, Silicon Labs was showing off their version of ARM’s smart watch, which they call Thunderboard Wear. It’s blown up to demo-board size and complete with Silicon Labs’ custom-designed blood pressure and ambient light sensors (Figure 2). The board is based on the company’s ARM Cortex-M3-based EFM32 Giant Gecko SoC. Silicon Labs’ main ARM TechCon announcement—and the reason they’re in the mbed Zone—is that all Gecko MCUs now support mbed OS. We’ll dig into what this means technically in a future post.

Figure 2: Silicon Labs’ version of ARM’s smart watch—blown up into demo board size and complete with Cortex-M3 Giant Gecko MCU and BP sensor. The rubber straps remind that this is still “wearable”, though only sort-of.

“Hello Chris”

Further proving the growing maturity of mbed OS and its ecosystem is the Zebra Technologies “wireless mbed to cloud” demo shown during Atmel’s evening reception at ARM TechCon (Zebra is also in the mbed Zone). Starting with Atmel’s ATSAMW25-PRO demo board plus display add-on (Figure 4) containing ARM Cortex-M3 and Cortex-M4 Atmel SoCs, Zebra demonstrated communicating directly from a console to the Wi-Fi-equipped demo.

Figure 4: Zebra Technologies demonstrates easy wireless connectivity to IoT devices using Atmel’s SAMW25 MCU board and OLED1 expansion board.

When “Hello Chris” was typed into Zebra’s Zatar browser-based software console (Figure 5), the sentence appeared on the tiny display almost immediately. More than a parlor trick, the demo shows the promise of the IoT, ARM cores, and the interoperability of mbed OS connected all the way back to the cloud and the Zatar device portal.

Figure 5: Zebra’s Zatar IoT cloud console dashboard.

Zebra’s Zatar cloud service works with Renesas’s Synergy IoT platform, Freescale’s Kinetis MCUs, and of course Atmel’s SoCs (will Atmel also create their own end-to-end ecosystem?). The Zebra “IoT Kit” demoed at TechCon is “the first mbed 3.0 Wi-Fi kit that offers developers a prototype to quickly test drive IoT,” said Zebra Technologies. If you’re familiar with ARM’s mbed OS connectivity/protocol stack diagram, Zebra uses the CoAP protocol to connect devices to the cloud; the company was a CoAP co-developer.
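
To give a feel for why CoAP suits constrained nodes, here’s a minimal C++ sketch that hand-assembles the fixed 4-byte CoAP header from RFC 7252 plus a payload. A PUT with no options is a simplification; a real Zatar exchange would also carry Uri-Path options, a token, and security:

```cpp
// Sketch: building a bare-bones CoAP PUT packet (RFC 7252) to show the
// protocol's tiny on-the-wire footprint. Options (e.g. Uri-Path) are
// deliberately omitted for brevity.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

std::vector<uint8_t> coap_put(uint16_t msg_id, const std::string& payload) {
    std::vector<uint8_t> pkt;
    pkt.push_back(0x40);                                  // Ver=1, Type=CON, TKL=0
    pkt.push_back(0x03);                                  // Code 0.03 = PUT
    pkt.push_back(static_cast<uint8_t>(msg_id >> 8));     // Message ID, big-endian
    pkt.push_back(static_cast<uint8_t>(msg_id & 0xFF));
    pkt.push_back(0xFF);                                  // payload marker
    pkt.insert(pkt.end(), payload.begin(), payload.end());
    return pkt;
}

int main() {
    auto pkt = coap_put(0x1234, "Hello Chris");
    std::printf("CoAP PUT: %zu bytes on the wire\n", pkt.size());
}
```

Sixteen bytes total for the demo’s message, which is why CoAP, rather than HTTP, is the transport of choice in ARM’s mbed connectivity stack.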

The significance of the demo is multifold: quick development time using established Atmel hardware; cloud connectivity using Wi-Fi; an open-standard IoT protocol (CoAP); and compliance with ARM’s latest mbed OS 3.0.

The fact that the Zatar console easily connects to multiple vendors’ processors means thousands or tens of thousands of IoT nodes can be quickly controlled, updated, and queried for data with minimal effort. In short: creating wireless IoT products and using them just got a whole lot easier.

Zebra will be selling the Zebra ARM mbed IoT Kit for Zatar via distributors and more information is available on their website at www.zatar.com/IoTKit.

 

New HSA Spec Legitimizes AMD’s CPU+GPU Approach

Nearly three years after the formation of the Heterogeneous System Architecture (HSA) Foundation, the consortium has released version 1.0 of the Architecture Spec, Programmer’s Reference Manual, Runtime Specification, and a Conformance Plan.

Note: This blog is sponsored by AMD.

UPDATE 3/17/15: Added Imagination Technologies as one of the HSA founders. C2

No one doubts the wisdom of AMD’s Accelerated Processing Unit (APU) approach that combines an x86 CPU with a Radeon GPU. After all, one SoC does it all—makes the CPU decisions and drives multiple screens, right?

True. Both AMD’s G-Series and the AMD R-Series do all that, and more. But that misses the point.

In laptops this is how one uses the APU, but in embedded applications—like the IoT of the future, which increasingly relies on high performance embedded computing (HPEC) at the network’s edge—the GPU functions as a coprocessor. CPU + GPGPU (general-purpose graphics processing unit) is a powerful combination of decision-making plus parallel/algorithm processing done locally, at the node, reducing the burden on the cloud. This, according to AMD, is how the IoT will reach tens of billions of units so quickly.

Trouble is, HPEC programming is difficult. Coding the GPU requires a “ninja programmer”, as AMD’s VP of embedded Scott Aylor quipped during his keynote at this year’s Embedded World Conference in Germany. (Video of the keynote is here.) Worse still, capitalizing on the CPU + GPGPU combination requires passing data between the two architectures, which don’t share a unified memory architecture. (It’s not that AMD’s APU couldn’t be designed that way; rather, the processors require different memory architectures for maximum performance. In short: they’re different for a reason.)

AMD’s Scott Aylor giving keynote speech at Embedded World, 2015. His message: some IoT nodes demand high-performance heterogeneous computing at the edge.

AMD realized this limitation years ago, and in 2012 it catalyzed the HSA Foundation with several companies including ARM, Texas Instruments, Imagination Technologies, MediaTek, Qualcomm, Samsung and others. The goal was to create a set of specifications that not only define heterogeneous hardware architectures but also create an HPEC programming paradigm for CPUs, GPUs, DSPs and other compute elements. Collectively, the goal was to make designing, programming, and power optimizing easy for heterogeneous SoCs (Figure).

Heterogeneous systems architecture (HSA) specifications version 1.0 by the HSA Foundation, March 2015.

The HSA Foundation’s goals are realized by making the coder’s job easier using tools—such as an HSA version of the LLVM open source compiler—that integrate multiple cores’ ISAs. (Courtesy: HSA Foundation; all rights reserved.)

After three years of work, the HSA Foundation just released their specifications at version 1.0:

  • HSA System Architecture Spec: defines hardware and OS requirements, the memory model (important!), the signaling paradigm, and fault handling.
  • HSA Programmer’s Reference Manual: essentially a virtual ISA for parallel computing; defines an output format for HSA language compilers.
  • HSA Runtime Spec: an application library for running HSA applications; defines initialization, user queues, and memory management.

With HSA, the magic really does happen under the hood, where the devil’s in the details. For example, the HSA version of the LLVM open source compiler creates a vendor-agnostic HSA intermediate language (HSAIL) that’s essentially a low-level VM. From there, “finalizers” compile into vendor-specific ISAs such as AMD’s or Qualcomm Snapdragon’s. It’s at this point that low-level libraries can be added for specific silicon implementations (such as VSIPL for vector math). This programming model uses vendor-specific tools but allows novice programmers to start in C++ and end up with optimized, high-performance, power-efficient code for the heterogeneous combination of CPU+GPU or DSP.
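
To give a flavor of the Runtime Spec’s init/queue model, here’s a minimal C++ sketch against the HSA 1.0 runtime C API: initialize the runtime, find a GPU agent, and create a user-level dispatch queue. The header path varies by vendor SDK, and error handling is trimmed; treat it as a sketch, not a reference implementation:

```cpp
// Minimal bring-up of the HSA 1.0 runtime: INIT, agent discovery, user queue.
// Header location differs between vendor SDKs; error checks omitted.
#include <hsa.h>
#include <cstdio>

static hsa_status_t find_gpu(hsa_agent_t agent, void* data) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *static_cast<hsa_agent_t*>(data) = agent;   // remember the GPU agent
        return HSA_STATUS_INFO_BREAK;               // stop iterating
    }
    return HSA_STATUS_SUCCESS;
}

int main() {
    hsa_init();                                     // runtime INIT, per the spec
    hsa_agent_t gpu = {};
    hsa_iterate_agents(find_gpu, &gpu);             // enumerate CPU/GPU agents

    hsa_queue_t* queue = nullptr;                   // user-level dispatch queue
    hsa_queue_create(gpu, 4096, HSA_QUEUE_TYPE_SINGLE,
                     nullptr, nullptr, UINT32_MAX, UINT32_MAX, &queue);
    std::puts(queue ? "GPU queue ready for AQL packets" : "no GPU queue");

    if (queue) hsa_queue_destroy(queue);
    hsa_shut_down();
}
```

Finalization of HSAIL kernels and AQL packet dispatch would follow from here; the point is that the application sees one portable runtime regardless of whose silicon sits underneath.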

There are currently 43 companies and 16 universities involved with HSA, plus three working groups (which are already working on version 1.1). Look at the participants, think of their market positions, and you’ll see they have a vested interest in making this a success.

In AMD’s case, as the only supplier of both x86 and ARM + GPU APUs to the embedded market, the company sees even bigger successes as more embedded applications leverage heterogeneous parallel processing.

One example where HSA could be leveraged, said Phil Rogers, President of the HSA Foundation, is for multi-party video chatting. An HSA-compliant heterogeneous architecture would allow the processors to work in a single (virtual) memory pool and avoid the multiple data set copies—and processor churn—prevalent in current programming models.
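
Here is a toy C++ contrast of the two models Rogers describes; the function names are stand-ins, not a real GPU API, and the only point is what gets copied on each CPU-to-GPU hand-off:

```cpp
// Toy contrast: per-hand-off buffer copies vs. HSA-style shared virtual memory.
// Function names are illustrative, not an actual GPU runtime API.
#include <cstdint>
#include <vector>

struct Frame { std::vector<uint8_t> pixels; };

// Legacy model: every CPU<->GPU hand-off duplicates the frame into a separate
// device-side copy (the "multiple data set copies" described above).
Frame copy_to_device(const Frame& f) { return Frame{f.pixels}; }

// HSA model: CPU and GPU share one virtual address space, so the hand-off is
// just the pointer itself; the GPU dereferences the CPU's buffer directly.
const Frame* share_with_device(const Frame& f) { return &f; }

int main() {
    Frame cam{std::vector<uint8_t>(1920 * 1080 * 4)};   // one video-chat frame
    Frame dup = copy_to_device(cam);                    // ~8 MB moved per hand-off
    const Frame* shared = share_with_device(cam);       // zero bytes moved
    (void)dup; (void)shared;
}
```

Multiply that 8 MB by 30 frames per second and several chat participants and the appeal of a single virtual memory pool is obvious.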

With key industry players supporting HSA including AMD, ARM, Imagination Technologies, Samsung, Qualcomm, MediaTek and others, a lot of x86, ARM, and MIPS-based SoCs are likely to be compliant with the specification. That should kick off a bunch of interesting software development leading to a new wave of high performance applications.

Coke Machine with a Wii Mote? K-Cup Coffee Machine Marries Oculus Rift?

Talk about some kind of weird Coke-machine-meets-Minority Report mash-up: the day is here when vending machines will do virtual reality tricks, access your Facebook page, and be as much fun as Grand Theft Auto. And they’ll still dispense stuff.

By Jeff Moriarty (Custom Coke machine – operating screen) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

According to embedded tech experts ADLINK Technology and Intel, the intelligent vending machine is here. Multiple iPad-like screens will entertain you, suggest which product to buy, feed you social media, and take your money.

These vending machines will be IoT-enabled, connected to the cloud and multiple online databases, and equipped with multiple cameras. Onboard signal processing will respond to 3D gesture control or immerse you in a virtual reality scenario with the product you’re trying to buy. Facial recognition, voice recognition, and even following your eyes as you roam the screens will be the norm.

Facial recognition via Intel’s Perceptual Computing SDK. (Courtesy: Intel Corporation.)

For you, the customer, the vending machine experience will be fun, entertaining—and very much like a video game. Retailers, for their part, are hoping to make more money off of you while using remote IoT monitoring, predictive diagnostics, and “big data” to lower their costs.

The era of the intelligent vending machine is upon us. The machine is already connected to the Internet…and it’s one of the many billions of smart IoT nodes coming to a store near you.

Read more about them here.

Virtual, Immersive, Interactive: Performance Graphics and Processing for IoT Displays

Current-gen machines like these will give way to smart, IoT-connected machines with 64-bit graphics and virtual reality-like customer interaction.

Not every IoT node contains a low-performance processor, a sensor, and a slow comms link. Sure, there may be tens of billions of those, but estimates by IHS, Gartner, and Cisco still imply the need for billions of smart IoT nodes with hefty processing needs. These intelligent IoT platforms are best left to 64-bit algorithm processors like AMD’s G- and R-Series of Accelerated Processing Units (APUs). AMD’s claim to fame is 64-bit cores combined with on-board Radeon graphics processing units (GPUs) and tons of I/O.

As an example, consider this year’s smart vending machine. It may dispense espresso or electronic toys, or maybe show the customer wearing virtual custom-fit clothing. Suppose the machine showed you—at that very moment—using or drinking the product you were staring at just seconds before.

Far-fetched? Far from it. It’s real.

These machines require a multi-media, sensor fusion experience. Multiple iPad-like touch screens may present high-def product options while cameras track customers’ eye movements, facial expressions, and body language in three-space.

This “visual compute” platform will tailor the displayed information to best interact with the customer in an immersive, gesture-driven experience. Fusing all these inputs, processing the data in real time, and driving multiple displays is best handled by 64-bit APUs with closely coupled CPU and GPU execution units, hardware acceleration, and support for standards like DirectX 11, HSA 1.0, OpenGL, and OpenCL.
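
To make the OpenCL angle concrete, here’s a minimal host-plus-kernel sketch of the kind of per-pixel camera work (RGBA-to-luma conversion feeding a gesture or eye-tracking pipeline) that an APU’s GPU handles well. The kernel and frame size are illustrative, an OpenCL runtime is assumed to be installed, and error checking is omitted for brevity:

```cpp
// Minimal OpenCL sketch: convert one camera frame to luma on the GPU.
// Assumes an installed OpenCL runtime; all error checks trimmed.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc = R"(
__kernel void luma(__global const uchar4* rgba, __global uchar* out) {
    size_t i = get_global_id(0);
    uchar4 p = rgba[i];
    out[i] = (uchar)((p.x * 77 + p.y * 150 + p.z * 29) >> 8);  // Rec.601-ish
})";

int main() {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    const size_t n = 640 * 480;                        // one camera frame
    cl_uchar4 px; px.s[0] = 128; px.s[1] = 64; px.s[2] = 32; px.s[3] = 255;
    std::vector<cl_uchar4> frame(n, px);
    std::vector<cl_uchar> gray(n);
    cl_mem in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(cl_uchar4), frame.data(), nullptr);
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n, nullptr, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "luma", nullptr);
    clSetKernelArg(k, 0, sizeof(in), &in);
    clSetKernelArg(k, 1, sizeof(out), &out);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, out, CL_TRUE, 0, n, gray.data(), 0, nullptr, nullptr);
    std::printf("first luma byte: %u\n", gray[0]);
}
```

On an APU, that GPU dispatch runs alongside the CPU’s decision logic on the same die, which is exactly the division of labor the vending-machine scenario calls for.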

For heavy lifting in visual compute-intensive IoT platforms, keep an eye on AMD’s graphics-ready APUs.

If you are attending Embedded World, February 24-26, be sure to check out the keynote “Heterogeneous Computing for an Internet of Things World,” by Scott Aylor, Corporate VP and General Manager, AMD Embedded Solutions, on Wednesday the 25th at 9:30.

This blog was sponsored by AMD.