AMD’s “Beefy” APUs Bulk Up Thin Clients for HP, Samsung

There are times when a tablet is too light, and a full desktop too much. The answer? A thin client PC powered by an AMD APU.

Note: this blog is sponsored by AMD.

A desire to remotely access my Mac and Windows machines from somewhere else got me thinking about thin client architectures. A thin “client” machine has sufficient processing for local storage and display—plus keyboard, mouse and other I/O—and is remotely connected to a more beefy “host” elsewhere. The host may be in the cloud or merely somewhere else on a LAN, sometimes intentionally inaccessible for security reasons.

Thin client architectures—or just “thin clients”—find utility in call centers, kiosks, hospitals, “smart” monitors and TVs, military command posts and other multi-user, virtualized installations. At times they’ve been characterized as low performance or limited in functionality, but that’s changing quickly.

They’re getting additional processing and graphics capability thanks to AMD’s G-Series and A-Series Accelerated Processing Units (APUs). By some analysts’ reckoning, AMD is number one in thin clients, and the company keeps winning designs with its highly integrated x86-plus-Radeon-graphics SoCs: most recently with HP and Samsung.

HP’s t420 and mt245 Thin Clients

HP’s ENERGY STAR certified t420 is a fanless thin client for call centers, Desktop-as-a-Service, and remote kiosk environments (Figure 1). Intended to mount on the back of a monitor such as the company’s ProDisplays (like you see at the doctor’s office), the unit runs HP’s ThinPro 32 or Smart Zero Core 32 operating system, has either 802.11n Wi-Fi or Gigabit Ethernet, and carries 8 GB of Flash and 2 GB of DDR3L SDRAM.

Figure 1: HP’s t420 thin client is meant for call centers and kiosks, mounted to a smart LCD monitor. (Courtesy: HP.)

USB ports for keyboard and mouse supplement the t420’s dual display capability (DVI-D and VGA)—made possible by AMD’s dual-core GX-209JA running at 1 GHz.

Says AMD’s Scott Aylor, corporate vice president and general manager, AMD Embedded Solutions: “The AMD Embedded G-Series SoC couples high performance compute and graphics capability in a highly integrated low power design. We are excited to see innovative solutions like the HP t420 leverage our unique technologies to serve a broad range of markets which require the security, reliability and low total cost of ownership offered by thin clients.”

The whole HP thin client consumes a mere 45W and, according to StorageReview.com, will retail for $239.

Along the lines of a lightweight mobile experience, HP has also chosen AMD for its mt245 Mobile Thin Client (Figure 2). The thin client “cloud computer” resembles a 14-inch (1366 x 768 resolution) laptop with up to 4 GB of SDRAM and a 16 GB SSD; the unit runs Windows Embedded Standard 7P 64 on AMD’s quad-core A6-6310 APU with Radeon R4 GPU. There are three USB ports, one VGA and one HDMI port, plus Ethernet and optional Wi-Fi.

Figure 2: HP’s mt245 is a thin client mobile machine, targeting healthcare, education, and more. (Courtesy: HP.)

Like the t420, the mt245 consumes a mere 45W; it’s intended for employee mobility but configured for a thin client environment. AMD’s director of thin client product management, Stephen Turnbull, says the mt245 targets “a whole range of markets, including education and healthcare.”

At the core of this machine, pun intended, is the Radeon GPU that provides heavy-lifting graphics performance. The mt245 not only takes advantage of virtualized cloud computing but also has the local moxie to run graphics-intensive applications like 3D rendering. Healthcare workers might, for example, examine ultrasound images. Factory technicians could pull up assembly drawings, then rotate them in CAD-like software applications.

Samsung Cloud Displays

An important part of Samsung’s displays business involves “smart” displays, monitors and televisions. Connected to the cloud or operating autonomously as a panel PC, many Samsung displays need local processing such as that provided by AMD’s APUs.

Samsung’s recently announced (June 17, 2015) 21.5-inch TC222W and 23.6-inch TC242W also use AMD G-Series devices in thin client architectures. Both displays are powered by the dual-core 2.2 GHz GX222 with Radeon HD6290 graphics at 1920 x 1080 (full HD), provide six USB ports and Ethernet, and run Windows Embedded 7 from 4 GB of RAM and a 32 GB SSD.

Figure 3: Samsung’s Cloud Displays also rely on AMD G-Series APUs.

Said Seog-Gi Kim, senior vice president, Visual Display Business, Samsung Electronics, “Samsung’s powerful Windows Thin Client Cloud displays combine professional, ergonomic design with advanced thin-client technology.” The displays rely on the company’s Virtual Desktop Infrastructure (VDI) through a centrally managed data center that increases data security and control (Figure 3). Applications include education, business, healthcare, hospitality or any environment that requires virtualized security with excellent local processing and graphics.

Key to the design wins is the performance density of the G-Series APUs, coupled with legacy x86 software interoperability. The APUs, for both HP and Samsung, add more beef to thin clients.

 

Proprietary “Standards” Feel Way Too Controlling

In the embedded industry, we’re surrounded by standards. Most are open, some are not. Open is better.

I recently got torqued at being held over the barrel once again for a proprietary embedded doodad: an Apple Lightning cable. All my old Apple cables are now useless, and Apple doesn’t use the very open (and cheap!) micro-USB cable that’s standard nearly everywhere else.

Some common cables: only the Apple Lightning cable on the left is proprietary. (Courtesy: Wikimedia Commons, by Algr.)

Embedded boards also come with myriad open and not-so-open standards. If it’s not a standard managed by a consortium (say, PCIe/104 OneBank by the PC/104 Consortium, or SMARC by SGET), designers should carefully weigh the pros and cons of choosing that vendor’s “standard.” Often, it makes sense. But beware of the commitment you’re making.

This is one reason that Linux, the world’s most popular open source software, became so prevalent: devs and their employers could control their own destiny.

Check out my full rant here: http://bit.ly/1U4PYcS

 

Move Over Arduino, AMD and GizmoSphere Have a “Jump” On You with Graphics

The UK’s National Videogame Arcade relies on CPU, graphics, I/O and openness to power interactive exhibits.

Editor’s note: This blog is sponsored by AMD.

When I was a kid I was constantly fascinated with how things worked. What happens when I stick this screwdriver in the wall socket? (Really.) How come the dinner plate falls down and not up?

Humans have to try things for themselves in order to fully understand them; this sparks our creativity and, for many of us, becomes a life calling.

Attempting to catalyze visitors’ curiosity, the UK’s National Videogame Arcade (NVA) opened in March 2015 with the sole intention of getting children and adults interested in videogames through the use of interactive exhibits, most of which are hands-on. The hope is that young people will first be stimulated by the games, and second, that they’ll someday unleash their creativity on the videogame and tech industries.

The UK’s National Videogame Arcade promotes gaming through hands-on exhibits powered by GizmoSphere embedded hardware.

Might As Well “Jump!”

The NVA is located in a corner building with lots of curbside windows—imagine a fancy New York City department store but without the mannequins in the street-side windows. Spread across five floors and a total of 33,000 square feet, the place is a cooperative effort between GameCity (a nice bunch of gamers), the Nottingham City Council, and local Nottingham Trent University.

The goal of pulling in 60,000 visitors a year is partly achieved by the NVA’s signature exhibit, “Jump!”, which allows visitors to experience gravity (without the plate) and how it affects videogame characters like those in Donkey Kong or Angry Birds. Visitors actually get to jump on the Jump-o-tron, a physics-based sensor platform that’s controlled by GizmoSphere’s Gizmo 2 development board.

The Jump-o-tron uses AMD’s G-Series SoC combining an x86 CPU and Radeon GPU.

The heart of Gizmo 2 is AMD’s G-Series APU, combining a 64-bit x86 CPU and Radeon graphics processor. Gizmo 2 is the latest creation from the GizmoSphere nonprofit open source community which seeks to “bring the power of a supercomputer and the I/O capabilities of a microcontroller to the x86 open source community,” according to www.gizmosphere.org.

The open source Gizmo 2 runs Windows and Linux, bridging PC games to the embedded world.

“Jump!” allows visitors to experience—and tweak—gravity while examining the effect upon on-screen characters. The combination requires extensive processing—up to 85 GFLOPS worth—plus video manipulation and display. What’s amazing is that “Jump!”, along with many other NVA exhibits, isn’t powered by rackmount servers but rather by the tiny 4 x 4 inch Gizmo 2, which supports DirectX 11.1, OpenGL 4.2, and OpenCL 1.2. It also runs Windows and Linux.
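
For the technically curious, the core of a “Jump!”-style mechanic is surprisingly little code: integrate gravity every frame and let the visitor scale it. Here’s a hypothetical minimal C++ sketch (the 60 Hz frame time, take-off velocity, and all names are my assumptions, not the NVA’s actual code):

```cpp
// Hypothetical "Jump!"-style physics core: Euler-integrate gravity each frame.
// Visitors "tweak" gravity simply by changing g between jumps.
#include <iostream>

int main() {
    const double dt = 1.0 / 60.0;  // assumed 60 Hz frame loop
    double g  = 9.81;              // m/s^2 -- the knob visitors get to turn
    double y  = 0.0;               // character height (m)
    double vy = 5.0;               // assumed take-off velocity from the floor sensor (m/s)

    // Step the jump until the character lands again.
    while (y >= 0.0) {
        y  += vy * dt;             // position update
        vy -= g * dt;              // gravity pulls velocity down
        std::cout << y << '\n';    // in an exhibit, this would drive the on-screen sprite
    }
}
```

Halve g and the same take-off velocity produces a floaty, moon-like arc; double it and the character barely leaves the ground. Rendering those arcs smoothly on big displays is where the Radeon side of the APU earns its keep.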

AMD’s “G” Powers Gizmo 2

Gizmo 2 is a dense little package, sporting HDMI, Ethernet, PCIe, USB (2.0 and 3.0), plus myriad other A/V and I/O such as A/D/A—all of them essential for NVA exhibits like “Jump!” Says Ian Simons of the NVA, “Gizmo 2 is used in many of our games…and there are plans for even more games embedded into the building,” including furniture and even street-facing window displays.

Gizmo 2’s small size and support for open source software and hardware—plus the ability to develop on the gamer’s favorite Unity engine—make Gizmo 2 the preferred choice. Yet the market contains ample platforms from which to choose. Arduino comes to mind.

Gizmo 2’s schematic. The x86 G-Series SoC is loaded with I/O.

Compared to Arduino, the AMD G-Series SoC (GX-210HA) powering Gizmo 2 is orders of magnitude more powerful; it’s x86-based and runs at 1.0 GHz (the integral GPU runs at 300 MHz). This makes the world’s cache of Intel-oriented, Windows-based software and drivers available to Gizmo 2—including some server-side programs. “NVA can create projects with Gizmo 2, including 3D graphics and full motion video, with plenty of horsepower,” says Simons. He’s referring to some big projects already installed at the NVA, plus others in the planning stages.

“One of things we’d like to do,” Simons says, “is continue to integrate Gizmo 2 into more of the building to create additional interactive exhibits and displays.” The small size of Gizmo 2, plus the wickedly awesome performance/graphics rendering/size/Watt of the AMD G-Series APU, allows Gizmo 2 to be embedded all over the building.

See Me, Feel Me

With a nod to The Who’s (1) rock opera Tommy, the NVA building will soon have more Gizmo 2 modules wired into the infrastructure, mixing images and sound. There are at least three projects in the concept stage:

  • DMX addressable logic in the central stairway. With exposed cables and beams, visitors would be able to control the audio, video, and possibly LED lighting of the stairwell area using a series of switches. The author wonders if voice or tactile feedback would create all manner of immersive “psychedelic” A/V in the stairwell’s central hall.
  • Controllable audio zones in the rooftop garden. The NVA’s Yamaha-based sound system already includes 40 zones. Adding AMD G-Series horsepower to these zones would allow visitors to create individually customized light/sound shows, possibly around botanical themes. Has there ever been a Little Shop of Horrors videogame where the plants eat the gardener? I wonder.
  • Sidewalk animation that uses all those street-facing windows to animate the building, possibly changing the building’s façade (Star Trek cloak, anyone?) or even individually controlling games inside the building from outside (or presenting inside activities to the outside). Either way, all those windows, future LCDs, and reams of I/O will require lots more Gizmo 2 embedded boards.

The Gizmo 2 costs $199 and is available from several retailers such as Element14. With the Gerber files, schematics, and all the board-focused software open source, it’s no wonder this x86 embedded board is attractive to gamers. With AMD’s G-Series APU onboard, the all-in-one HDK/SDK is an ideal choice for embedded designs—and those future gamers playing with the Gizmo 2 at the UK’s NVA.

BTW: The Who hailed from London, not Nottingham.

AMD on a Design Win Roll: GE and Samsung, Recent Examples

AMD is announcing several design wins per week as second-gen APUs show promise.

Note: AMD is a sponsor of this blog.

I follow many companies on Twitter, but lately it’s AMD that’s tweeting the loudest with weekly design wins. The company’s APUs—accelerated processing units—seem to be gaining traction in systems where PC functionality with game-like graphics is critical. Core to both of these—pun intended!—is the x86 ISA with its PC compatibility and rich software ecosystem.

Here’s a look at two of AMD’s recent design wins, one for an R-Series and the other for the all-in-one G-Series APU.

Samsung’s “set-back box” adds high-res graphics and PC functions to their digital signage displays. (Courtesy: Samsung.)

Samsung Digital Signs on to AMD

In April, Samsung and AMD announced that AMD’s second-gen embedded R-Series APU, previously codenamed “Bald Eagle,” is powering Samsung’s latest set-back box (SBB) digital media players. I had no idea what a set-back box was until I looked it up.

Turns out it’s a slim embedded “pizza box” computer 310mm x 219mm x 32mm (12.2in x 8.6in x 1.3in) that’s inserted into the back (“set-back”) of a Samsung Large Format Display (LFD). These industrial-grade LFDs range in size from 32in to 82in and are used in digital signage applications.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

What makes the SBB so compelling is also the reason Samsung chose AMD’s R-Series APU: the SBB is a complete networked PC, alleviating the need for a separate box. The units are remotely controlled by Samsung’s MagicInfo software, which allows up to 192 displays to be linked with same- or stitched-display information.

That is, one can build a video wall where the image is split across the displays—relying on AMD’s Eyefinity graphics feature—or content can be streamed across networked displays, depending upon the retailer’s desired effect. Key to Samsung’s selling differentiation is remote management, RS232 control, and network-based self-diagnostics with active alert notification of problems.

Samsung is using the RX-425BB APU with integrated AMD Radeon R6 GPU. Per the datasheet, this version has a 35W TDP and four x86 cores based on AMD’s latest “Steamroller” 64-bit architecture, plus six GPU cores at 654 MHz; designs can also pair it with AMD’s Embedded Radeon E8860 discrete GPU. Each R-Series APU can drive four 3D, 4K, or HD displays (up to 4096 x 2160 pixels) while running DirectX 11.1, OpenGL 4.2, and AMD’s Mantle gaming SDK.

As neat as all of this is—it’s a super high-end embedded LAN-party “gaming” PC system, after all—it’s the support for the latest HSA Foundation specs that makes the R-Series (and companion G-Series SOC) equally compelling for deeply embedded applications. HSA allows mixed CPU and GPU computation, which is especially useful in industrial control with its combination of general purpose, machine control, and display requirements.

GE Chooses AMD SOC for SFF

The second design win for AMD was back in February and it wasn’t broadcast widely: I stumbled across it while working on a sponsored piece for GE Intelligent Platforms (Disclosure: GE-IP is a sponsor of this blog.)

The AMD G-Series is now a monolithic, single-chip SOC that combines x86 CPU and Radeon graphics. (Courtesy: GE; YouTube.)

Used in a rugged, COM Express industrial controller, the AMD G-Series SOC met GE’s needs for low power and all-in-one processing, said Tommy Swigart, Global Product Manager at GE Intelligent Platforms. The “Jaguar” core in the SOC can sip as little as 5W TDP, yet still offers 3x PCIe, 2x GigE, 4x serial, plus HD audio and video, 10 USB (including 2x USB 3.0) and 2 SATA interfaces. What a Swiss Army knife of capability it is.

GE chose AMD’s G-Series APU for a rugged COM Express module for use in GE’s Industrial Internet. (Courtesy: GE Intelligent Platforms; YouTube.)

GE’s going all-in with the GE Industrial Internet, the company’s version of the IoT. Since the company is so diversified, GE can wring cost efficiencies for its customers by predicting aircraft maintenance, reducing energy in office HVAC installations, and interconnecting telemetry from locomotives to reduce track traffic and downtime. AMD’s G-Series APU brings computation, graphics, and bundles of I/O in a single-chip SOC—ideal for use in GE’s rugged SFF.

GE’s Industrial Internet runs on AMD’s G-Series APU. (Courtesy: GE; YouTube.)

 

CES Turns VPX Upside Down Using COM

Instead of putting I/O on a mezzanine, the processor is on the mezzanine and VPX is the I/O baseboard.

[ UPDATE: 19:00 hr 24 Apr 2015. Changed the interviewee's name to Wayne McGee, not Wayne Fisher. These gentlemen know each other, and Mr. McGee thankfully was polite about my misnomer. A thousand pardons! Also clarified that the ROCK-3x was previously announced. C. Ciufo ]

The computer-on-module (COM) approach puts the seldom-changing I/O on the base card and mounts the processor on a mezzanine board. The thinking is that processors change every few years (faster, more memory, from Intel to AMD to ARM, for example) but a system’s I/O remains stable for the life of the platform.

COM is common (no pun) in PICMG standards like COM Express, SGET standards like Q7 or SMARC, and PC/104 Consortium standards like PC/104 and EBX.

But to my knowledge, the COM concept has never been applied to VME or VPX. With these, the I/O is on the mezzanine “daughter board” while the CPU subsystem is on the base “mother board.”

Until now.

Creative Electronic Solutions—CES—has plans to extend its product line into more 3U OpenVPX I/O carrier boards onto which are added “processor XMC” mezzanines. An example is the newer AVIO-2353 with VPX PCIe bus—meaning it plugs into a 3U VPX chassis and acts as a regular VPX I/O LRU. By itself, it has MIL-STD-1553, ARINC-429, RS232/422/485, GPIO, and other avionics-grade goodies.

The CES ROCK-3210 VNX small form factor avionics chassis.

But there’s an XMC site for adding the processor, such as the company’s MFCC-8557 XMC board that uses a Freescale P3041 quad-core Power Architecture CPU. If you’re following this argument, the 3U VPX baseboard has all the I/O, while the XMC mezzanine holds the system CPU. This is a traditional COM stack, but it’s unusual to find it within the VME/VPX ecosystem.

“This is all part of CES’s focus on SWAP, high-rel, and safety-critical ground-up design,” said Wayne McGee, head of CES North America. The company is in the midst of rebranding itself and the shiny new website found at www.ces-swap.com makes their intentions known.

CES has been around since 1981 and serves high-rel platforms like the supercollider at CERN, the Predator UAV, and various Airbus airframes. The emphasis has been on mission- and safety-critical LRUs and systems “Designed for Safety” to achieve DAL-C under DO-178B/C and DO-254.

“We’ll be announcing three new products at AUVSI this year,” McGee told me, “and you can expect to see more COM-style VPX/XMC combinations with some of the latest processors.” Also to be announced will be extensions to the company’s complete VNX small form factor (SFF) chassis systems, such as a new version of the rugged open computer kit (ROCK-3x)—previously announced in February at Embedded World.

CES is new to me, and it’s great to see some different-from-the-pack innovation from an old-school company that clearly has new-school ideas. We’ll be watching closely for more ROCK and COM announcements, all still targeting small, deployable, safety-certifiable systems.

New HSA Spec Legitimizes AMD’s CPU+GPU Approach

Nearly three years after the formation of the Heterogeneous System Architecture (HSA) Foundation, the consortium has released version 1.0 of its Architecture Spec, Programmer’s Reference Manual, Runtime Specification, and a Conformance Plan.

Note: This blog is sponsored by AMD.


 

UPDATE 3/17/15: Added Imagination Technologies as one of the HSA founders. C2

No one doubts the wisdom of AMD’s Accelerated Processing Unit (APU) approach that combines an x86 CPU with a Radeon graphics GPU. After all, one SoC does it all—makes CPU decisions and drives multiple screens, right?

True. Both AMD’s G-Series and the AMD R-Series do all that, and more. But that misses the point.

In laptops this is how one uses the APU, but in embedded applications—like the IoT of the future that’s increasingly relying on high performance embedded computing (HPEC) at the network’s edge—the GPU functions as a coprocessor. CPU + GPGPU (general purpose graphics processor unit) is a powerful combination of decision-making plus parallel/algorithm processing that does local, at-the-node processing, reducing the burden on the cloud. This, according to AMD, is how the IoT will reach tens of billions of units so quickly.

Trouble is, HPEC programming is difficult. Coding the GPU requires a “ninja programmer,” quipped AMD’s VP of embedded Scott Aylor during his keynote at this year’s Embedded World Conference in Germany. (Video of the keynote is here.) Worse still, capitalizing on the CPU + GPGPU combination requires passing data between the two architectures, which don’t share a unified memory architecture. (It’s not that AMD’s APU couldn’t be designed that way; rather, the processors require different memory architectures for maximum performance. In short: they’re different for a reason.)

AMD’s Scott Aylor giving his keynote speech at Embedded World, 2015. His message: some IoT nodes demand high-performance heterogeneous computing at the edge.

AMD realized this limitation years ago and in 2012 catalyzed the HSA Foundation with several companies including ARM, Texas Instruments, Imagination Technologies, MediaTek, Qualcomm, Samsung, and others. The goal was to create a set of specifications that not only define heterogeneous hardware architectures but also create an HPEC programming paradigm for CPU, GPU, DSP, and other compute elements. Collectively, the goal was to make designing, programming, and power optimizing easy for heterogeneous SoCs (Figure).

The HSA Foundation’s goals are realized by making the coder’s job easier using tools—such as an HSA version of the LLVM open source compiler—that integrate multiple cores’ ISAs. Shown: Heterogeneous System Architecture (HSA) specifications version 1.0 by the HSA Foundation, March 2015. (Courtesy: HSA Foundation; all rights reserved.)

After three years of work, the HSA Foundation just released their specifications at version 1.0:

  • HSA System Architecture Spec: defines H/W, OS requirements, memory model (important!), signaling paradigm, and fault handling.
  • Programmer’s Reference Manual: essentially a virtual ISA for parallel computing; defines an output format for HSA language compilers.
  • HSA Runtime Spec: an application library for running HSA applications; defines INIT, user queues, and memory management (see the minimal sketch below).
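
To give a flavor of that runtime layer, here is a minimal sketch against the HSA 1.0 runtime C API (hsa.h): initialize the runtime, then enumerate the compute “agents” (CPUs, GPUs, DSPs) it exposes. Error handling is trimmed for space, and this is an illustration rather than production code:

```cpp
// Minimal HSA 1.0 runtime sketch: init, enumerate agents, shut down.
#include <hsa.h>
#include <cstdio>

static hsa_status_t print_agent(hsa_agent_t agent, void* /*data*/) {
    char name[64] = {0};
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_NAME, name);     // agent's name string
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);  // CPU, GPU, or DSP?
    std::printf("agent: %s (%s)\n", name,
                type == HSA_DEVICE_TYPE_GPU ? "GPU" :
                type == HSA_DEVICE_TYPE_CPU ? "CPU" : "other");
    return HSA_STATUS_SUCCESS;  // keep iterating
}

int main() {
    if (hsa_init() != HSA_STATUS_SUCCESS) return 1;  // the INIT the spec defines
    hsa_iterate_agents(print_agent, nullptr);        // discover compute agents
    hsa_shut_down();
}
```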

With HSA, the magic really does happen under the hood, where the devil’s in the details. For example, the HSA version of the LLVM open source compiler creates a vendor-agnostic HSA intermediate language (HSAIL) that’s essentially a low-level virtual machine. From there, “finalizers” compile into vendor-specific ISAs such as AMD’s or Qualcomm Snapdragon’s. It’s at this point that low-level libraries can be added for specific silicon implementations (such as VSIPL for vector math). This programming model uses vendor-specific tools but allows novice programmers to start in C++ and end up with optimized, performance-oriented, power-efficient code for the heterogeneous combination of CPU+GPU or DSP.

There are currently 43 companies involved with HSA, 16 universities, and three working groups (and they’re already working on version 1.1). Look at the participants, think of their market positions, and you’ll see they have a vested interest in making this a success.

In AMD’s case, as the only x86 and ARM + GPU APU supplier to the embedded market, the company sees even bigger successes as more embedded applications leverage heterogeneous parallel processing.

One example where HSA could be leveraged, said Phil Rogers, President of the HSA Foundation, is for multi-party video chatting. An HSA-compliant heterogeneous architecture would allow the processors to work in a single (virtual) memory pool and avoid the multiple data set copies—and processor churn—prevalent in current programming models.
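
HSA defines its own runtime, but the zero-copy idea Rogers describes is already visible in OpenCL 2.0’s shared virtual memory (SVM), which HSA-style hardware enables. A minimal sketch, assuming an OpenCL 2.0 platform is present; the buffer size and names are illustrative:

```cpp
// Zero-copy sketch via OpenCL 2.0 shared virtual memory: CPU and GPU see the
// same allocation at the same virtual addresses, so no buffer copies are needed.
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>

int main() {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);

    // One allocation, visible to both CPU and GPU.
    float* frame = static_cast<float*>(
        clSVMAlloc(ctx, CL_MEM_READ_WRITE, 1920 * 1080 * sizeof(float), 0));
    if (frame) {
        frame[0] = 1.0f;  // CPU-side write; a kernel would see it directly via
                          // clSetKernelArgSVMPointer() -- no clEnqueueWriteBuffer.
        clSVMFree(ctx, frame);
    }
    clReleaseContext(ctx);
}
```

The point is what’s absent: no explicit write-buffer/read-buffer round trips, which is exactly the copy churn a unified memory pool eliminates.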

With key industry players supporting HSA including AMD, ARM, Imagination Technologies, Samsung, Qualcomm, MediaTek and others, a lot of x86, ARM, and MIPS-based SoCs are likely to be compliant with the specification. That should kick off a bunch of interesting software development leading to a new wave of high performance applications.

Coke Machine with a Wii Mote? K-Cup Coffee Machine Marries Oculus Rift?

Talk about some kind of weird Coke-machine-meets-Minority Report mash-up, but the day is here when vending machines will do virtual reality tricks, access your Facebook page, and be as much fun as Grand Theft Auto. And they’ll still dispense stuff.

By Jeff Moriarty (Custom Coke machine – operating screen) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

According to embedded tech experts ADLINK Technology and Intel, the intelligent vending machine is here. Multiple iPad-like screens will entertain you, suggest which product to buy, feed you social media, and take your money.

These vending machines will be IoT-enabled, connected to the cloud and multiple online databases, and equipped with multiple cameras. Onboard signal processing will respond to 3D gesture control or immerse you in a virtual reality scenario with the product you’re trying to buy. Facial recognition, voice recognition, and even following your eyes as you roam the screens will be the norm.

Facial recognition via Intel’s Perceptual Computing SDK. (Courtesy: Intel Corporation.)

For you, the customer, the vending machine experience will be fun, entertaining—and very much like a video game. For the retailer, they’re hoping to make more money off of you while using remote IoT monitoring, predictive diagnostics, and “big data” to lower their costs.

The era of the intelligent vending machine is upon us. The machine’s already connected to the Internet…and it’s one of many billions of smart IoT nodes coming to a store near you.

Read more about them here.

Virtual, Immersive, Interactive: Performance Graphics and Processing for IoT Displays

Vending machines outside Walmart: current-gen machines like these will give way to smart, IoT connected machines with 64-bit graphics and virtual reality-like customer interaction.

Not every IoT node contains a low-performance processor, sensor, and slow comms link. Sure, there may be tens of billions of these, but estimates by IHS, Gartner, and Cisco still imply the need for billions of smart IoT nodes with hefty processing requirements. These intelligent IoT platforms are best left to 64-bit algorithm processors like AMD’s G- and R-Series of Accelerated Processing Units (APUs). AMD’s claim to fame is 64-bit cores combined with on-board Radeon graphics processing units (GPUs) and tons of I/O.

As an example, consider this year’s smart vending machine. It may dispense espresso or electronic toys, or maybe show the customer wearing virtual custom-fit clothing. Suppose the machine showed you, at that very moment, using or drinking the product in the machine you were just staring at seconds before.

Far fetched? Far from it. It’s real.

These machines require a multi-media, sensor fusion experience. Multiple iPad-like touch screens may present high-def product options while cameras track customers’ eye movements, facial expressions, and body language in three-space.

This “visual compute” platform will tailor the display information to best interact with the customer in an immersive, gesture-driven experience. Fusing all these inputs, processing the data in real time, and driving multiple displays is best handled by 64-bit APUs with closely coupled CPU and GPU execution units, hardware acceleration, and support for standards like DirectX 11, HSA 1.0, OpenGL, and OpenCL.
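
Some back-of-envelope arithmetic suggests why: three 1080p cameras at 30 frames/s, with an assumed ~50 operations per pixel for the vision front end (my numbers, not vendor figures), lands comfortably inside the ~85 GFLOPS quoted earlier for a G-Series part:

```cpp
// Hypothetical vision-workload estimate for a smart vending machine.
#include <cstdio>

int main() {
    const double pixels_per_sec = 1920.0 * 1080.0 * 30.0 * 3.0;  // 3 cameras @ 1080p30
    const double ops_per_pixel  = 50.0;                          // assumed front-end cost
    std::printf("~%.1f GFLOPS\n", pixels_per_sec * ops_per_pixel / 1e9);  // ~9.3 GFLOPS
}
```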

For heavy lifting in visual compute-intensive IoT platforms, keep an eye on AMD’s graphics-ready APUs.

If you are attending Embedded World February 24-26, be sure to check out the keynote “Heterogeneous Computing for an Internet of Things World” by Scott Aylor, Corporate VP and General Manager, AMD Embedded Solutions, on Wednesday the 25th at 9:30.

This blog was sponsored by AMD.

What’s the Nucleus of Mentor’s Push into Industrial Automation?

Mentor’s once nearly-orphaned Nucleus RT forms the foundation of a darned impressive software suite for controlling everything from meat packing to nuclear power plants.

Everyone appreciates an underdog—the pale, wimpy kid with glasses and brown polyester sweater who gets routinely beaten up by the popular boys—but sticks it out day after day and eventually grows up to create a tech start-up everyone loves. (Part of this story is my personal history; I’ll let you guess which part.)

So it is with Mentor’s Nucleus RTOS, which the company announced forms the basis for the recent initiative into Industrial Automation (I.A.). Announced this week at the ARC Industry Forum in Orlando is Mentor’s “Embedded Solution for Industrial Automation” (Figure 1). A cynic might look at this figure as a collection of existing Mentor products…slightly rearranged to make a compelling argument for a “solution” in the I.A. space. That skinny kid Nucleus is right there, listed on the diagram. Oh, how many times have I asked Mentor why they keep Nucleus around only to get beaten up by the big RTOS kids!

Figure 1: Mentor’s Industrial Automation Solution for embedded, IoT-enabled systems relies on the Nucleus RTOS, including a secure hypervisor and enhanced security infrastructure.

After all, you’ll recognize Mentor’s Embedded Linux, the Nucleus RTOS I just mentioned, and the company’s Sourcery debug/analyzer/IDE product suite. All of these have been around for a while, although Nucleus is the grown-up kid in this bunch. (Pop quiz: True or false: did all three of these products come from Mentor acquisitions? Bonus question: from what company or companies?)

Into this mix, Mentor is adding new security tools from our friends at Icon Labs, plus hooks to a hot new automation GUI/HMI called Qt. (Full disclosure: Icon Labs founder Alan Grau is one of our security bloggers; however, we were taken by surprise at this recent Mentor announcement!)

Industry 4.0: I.A. meets IoT

According to Mentor’s Director of Product Management for Runtime Solutions, Warren Kurisu (whose last name is pronounced just like my first name in Japanese: Ku-ri-su), I.A. is gaining traction, big time. There’s a term for it: “Industry 4.0.” The large industrial automation vendors—like GE, Siemens, Schneider Electric, and others—have long been collecting factory data and feeding it into the enterprise, seeking to reduce costs, increase efficiency, and tie systems into the supply chain. Today, we call this concept the Internet of Things (IoT), and Industry 4.0 is basically the promise of interoperability of currently bespoke (and proprietary) I.A. systems with smart, connected IoT devices, plus a layer of cyber security thrown in.

Mentor’s Kurisu points out that what’s changed is not only the kinds of devices that will connect into I.A. systems, but how they’ll connect in more ways than via serial SCADA or FieldBus links. Industrial automation will soon include all the IoT pipes we’re reading about: Wi-Fi, Bluetooth LE, various mesh topologies, Ethernet, cellular—basically whatever works and is secure.

The Skinny Kid Prevails

Herein lies the secret of Mentor’s Industrial Automation Solution. It just so happens the company has most of what you’d need to connect legacy I.A. systems to the IoT, plus add new kinds of smart embedded sensors into the mix. What’s driving the whole market is cost. According to a recent ARC survey, reduced downtime, improved process performance, reduced machine lifecycle costs—all of these, and more, are leading I.A. customers and vendors to upgrade their factories and systems.

Additionally, says Mentor’s Kurisu, having the ability to consolidate multiple pieces of equipment, reduce power, improve safety, and add more local, operator-friendly graphics are criteria for investing in new equipment, sensors, and systems.

Mentor brings something to the party in each of these areas:

- machine or system convergence, either by improved system performance or reduced footprint

- capabilities and differentiation, allowing I.A. vendors to create systems different from “the other guys”

- faster time-to-money, achieved through increased productivity in system design and debug, or anything else that reduces the I.A. vendor’s and their customers’ efforts.

Figure 2: Industrial automation a la Mentor. The embedded pieces rely on Nucleus RTOS, or variations thereof. New Qt software for automation GUIs plus security gateways from Icon Labs bring security and IoT into legacy I.A. installations.

Figure 2 sums up the Mentor value proposition, but notice how most of the non-enterprise blocks in the diagram are built upon the Nucleus RTOS.

Nucleus, for example, has achieved safety certification from TÜV SÜD, complete with artifacts (a version called Nucleus SafetyCert). Mentor’s Embedded Hypervisor—a foundational component of some versions of Nucleus—can be used to create a secure partitioned environment for either multicore or multiple processors (heterogeneous or homogeneous), in which to run multiple operating systems that won’t cross-pollute in the event of a virus or other compromise.

New to the Mentor offering is an industry-standard Qt GUI running on Linux, or Qt optimized for embedded instantiations running on—wait for it—Nucleus RTOS. Memory and other performance optimizations reduce the footprint and speed up boot, and there are versions now for popular IoT processors such as ARM’s Cortex-M-class cores.
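
For a flavor of the HMI side, here’s a generic, minimal Qt Widgets sketch in C++ (every name below is illustrative rather than from Mentor’s offering; embedded builds often favor QML, but the signal/slot model is the same):

```cpp
// Minimal Qt HMI panel: a status readout and an e-stop button wired together.
#include <QApplication>
#include <QLabel>
#include <QPushButton>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char** argv) {
    QApplication app(argc, argv);

    QWidget panel;
    auto* status = new QLabel("Line 3: RUNNING");
    auto* stop   = new QPushButton("Emergency Stop");

    // Signal/slot connection: a button click updates the operator display.
    QObject::connect(stop, &QPushButton::clicked,
                     [status] { status->setText("Line 3: STOPPED"); });

    auto* layout = new QVBoxLayout(&panel);
    layout->addWidget(status);
    layout->addWidget(stop);
    panel.show();

    return app.exec();
}
```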

Playground Victory: The Take-away

So if the next step in Industrial Automation is Industry 4.0—the rapid build-out of industrial systems reducing cost, adding IoT capabilities with secure interoperability—then Mentor has a pretty compelling offering. The consolidation and emphasis on low power I mentioned above can be had for free via capabilities already built into Nucleus.

For example, embedded systems based on Nucleus can intelligently turn off I/O and displays and even rapidly drive multicore processors into their deepest sleep modes. One example explained to me by Mentor’s Kurisu showed an ARM-based big.LITTLE system that ramped performance when needed but kept power to a minimum. This is made possible, in part, by Mentor’s power-aware drivers for an entire embedded I.A. system under the control of Nucleus.
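
Mentor’s power-aware framework is Nucleus-specific, so by way of analogy only, here’s the same idea expressed through Linux’s standard cpufreq sysfs interface, where software trades performance for power by switching governors (the paths below are Linux conventions, not Nucleus APIs):

```cpp
// Analogy only: switch a CPU core's cpufreq governor on Linux (requires root).
#include <fstream>
#include <string>

bool set_governor(int cpu, const std::string& governor) {
    std::ofstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                    "/cpufreq/scaling_governor");
    if (!f) return false;
    f << governor << std::flush;  // e.g. "powersave" when idle, "performance" for bursts
    return f.good();
}

int main() {
    return set_governor(0, "powersave") ? 0 : 1;  // park core 0 in a low-power policy
}
```

An RTOS like Nucleus does the equivalent beneath the application, in its drivers and scheduler, which is what lets a big.LITTLE design ramp up for bursts and sleep deeply the rest of the time.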

And in the happy ending we all hope for, it looks like the maybe-forgotten Nucleus RTOS—so often ignored by editors like me writing glowingly about Wind River’s VxWorks or Green Hills’ INTEGRITY—well, maybe Nucleus has grown up. It’s the RTOS ready to run the factory of the future. Perhaps your electricity is right now generated under the control of the nerdy little RTOS that made it big.

Eye diagrams don’t lie: ReDrivers, before and after

USB, HDMI, or analog signals degrade to mush over long signal traces or cables; check out these “before and after” videos for proof.

Note: this particular blog posting is sponsored by Pericom Semiconductor.

I’ve been writing about Pericom Semiconductor’s signal integrity products for the last few months, and it’s been shocking to see digital signals acting like analog audio waveforms and bouncing around like techno music on a DJ’s VU meter.

Noise, attenuation, cross-talk and other high frequency nasties take clean rise/fall edges at the signal source and turn them all sloppy and jittery after only a few interconnects…or long cable runs.

Here’s what a good high frequency SERDES signal should look like (right).

Before and after

What better way to showcase the benefits of signal integrity ReDrivers and adaptive equalization (AEQ) than with some scope pix?

Seeing is believing: check out the following eye diagrams.

#1 USB 3.0 – 5 Gbps never looked so good

Using a standard PC motherboard and a 36-inch trace test board, the degraded (closed “eye”) USB 3.0 signal is shown below. Note that a USB 3.0 receiver at the destination end would not recover data and the system just wouldn’t work. The cause: PCB vias, traces, connectors, and cabling ruin the 5 Gbps digital signal.
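
A quick bit of unit-interval arithmetic shows why 5 Gbps is so unforgiving (the jitter figure below is an assumed illustration, not a Pericom measurement):

```cpp
// At 5 Gbps, each bit gets a 200 ps window; modest jitter eats the eye quickly.
#include <cstdio>

int main() {
    const double bit_rate  = 5e9;              // USB 3.0 SuperSpeed
    const double ui_ps     = 1e12 / bit_rate;  // unit interval: 200 ps per bit
    const double jitter_ps = 60.0;             // assumed total jitter at the receiver
    std::printf("UI = %.0f ps, eye opening = %.0f ps (%.0f%% of UI)\n",
                ui_ps, ui_ps - jitter_ps, 100.0 * (ui_ps - jitter_ps) / ui_ps);
}
```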

Figure 1: This USB 3.0 system wouldn’t work due to noise, jitter, and out-of-spec signals.

By adding a tiny (2mm x 2mm) USB 3.0 ReDriver amplifier, the figure below shows a restored signal at the receiver and 5 Gbps data flow. Note the open “eye,” the margin above/below the signal continuum, and the tight timing on the overlapped traces. This is a very clean signal and the system works within specs. (The complete PC motherboard set-up is in this USB 3.0 video.)

#2 HDMI – seeing is believing

Our “always on” society bombards us with LCD screens everywhere we go. HDMI is the preferred video source in home theatre, conference center, digital signage, and other remote LCD setups. Second and third screens connected to modern laptops also rely on HDMI. But at 2.5 Gbps, HDMI is a serial standard that’s highly susceptible to signal integrity problems.

Using a PC’s display port as the video source, an extreme case was rigged in a lab using a long 30 m HDMI cable. The SI results are predictably bad, as shown below.

Adding a signal recovery and conditioning HDMI ReDriver, the unrecognizable signals from above were astoundingly improved to an open “eye” with ample margin, as shown below. HDMI ReDrivers not only return the signal to acceptable levels, they come in flavors that add desirable video features such as equalization, splitters, level shifts, color correction, and more. (Short video with demo set-up is here.)

#3 CCTV analog video

While not a digital signal, I threw this one in because it’s a real world example of poor analog signal integrity that you can see on a screen.

Scenario: Most closed-circuit TV surveillance systems are still analog because they’re cheap, and systems are characterized by long analog cable runs from source to viewing destination. PCB connectors, traces and video CODEC ICs further degrade signals.

The lab set-up: 500m COAX cable; signal generator; AEQ “on” and “off.” In the “Before” picture, lousy, lossy signals and attenuation create a distorted and “wavy” test pattern. Unlike degraded digital (serial) USB 3.0 and HDMI signals that won’t recover at the receiver, SI-affected analog signals still carry information, but the output is unacceptable in surveillance applications.

After a video decoder is added at the receiving end and adaptive equalization algorithms are enabled (AEQ = “on”), the result is shown below. The image is recovered and enhanced with no loss of detail. This is after a whopping 500 m (~1/3 mile) of cable. (Video is here.)

Conclusions

Signals degrade, and the faster things clock or the longer the traces and cables run, the worse things get. At gigahertz speeds, ReDrivers from companies like Pericom Semiconductor clean up signals and make PCIe, USB 3.0, HDMI, and other standards usable. And even with pure analog signals, video decoders with DSP-based AEQ can dramatically clean things up.