Today’s Tech News…from the News: Slow Growth Forecast

We stitch together a technology forecast based upon several news snippets.

Every so often I make a conscious decision to pull my head out of the technology headlines of RSS feeds, embedded-industry newsletters, and various analyst reports. Sometimes I read the broader media just to follow politics (Yikes! Why bother.), local news (it’s always grim), and financial minutiae. For the latter, I turn to FORTUNE magazine—for which I actually pay real money.

It turns out that U.S. productivity growth is slowing, according to the article “We Were Promised a 20-Hour Workweek.” Since 2010 our productivity—output per labor hour—has grown by only 0.5 percent per year, per the U.S. Bureau of Labor Statistics. That’s bad, especially compared with the 1.6 percent annual average of 1974–1994 and the 2.8 percent of 1995–2004. Productivity rises with either fewer labor hours (the denominator) or higher output (the numerator). Technology can affect both numbers.
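
To see what those growth rates mean in practice, here’s a quick back-of-the-envelope sketch of how they compound over a decade. Only the BLS growth rates above come from the article; the starting index of 100 is an arbitrary placeholder.

```python
# Back-of-the-envelope productivity math. Productivity = output / labor hours;
# only the growth rates (0.5%, 1.6%, 2.8%) come from the article -- the
# starting index of 100 is an arbitrary placeholder.

def compound(index, annual_growth, years):
    """Grow a productivity index at a fixed annual rate for `years` years."""
    return index * (1 + annual_growth) ** years

start = 100.0
print(round(compound(start, 0.005, 10), 1))  # ~105: ten years at 0.5%/yr (post-2010 pace)
print(round(compound(start, 0.016, 10), 1))  # ~117: ten years at 1.6%/yr (1974-1994 pace)
print(round(compound(start, 0.028, 10), 1))  # ~132: ten years at 2.8%/yr (1995-2004 pace)
```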

The Internet of Things: Hope, or Hype? (Courtesy: Wiki Commons; Author: Wilgengebroed.)

Demand Side

Higher productivity (output) has historically been driven by technology, either by improving the work itself or by creating new industries—and new work—altogether. Think steam engines, railroads, the Industrial Revolution, electricity, automobiles, women entering the workforce, PCs, the Internet and smartphones.

Check the stats on the last three: while the Internet grows each year, it’s now building out slowly into poorer, less-developed regions with little money to spend. PCs and smartphones? Flat, baby. So on the demand side, demand for technology is down. Thin demand means tech isn’t doing much to increase productivity.

Supply Side

On the other side of the coin, technology supply is also down. It makes sense…no demand means no supply, unless the stuff is being stockpiled someplace in warehouses. According to a Gartner statement in April 2016, worldwide semiconductor revenue is on track to decline 0.6% in 2016. Worrisome, but it’s only April. And the news is worse: that would make it the second down year in a row, following 2015’s 2.3% decline.

So accept that both technology supply and demand are down, and this affects productivity.

Is the root of this a complete failure on the part of tech companies to come up with some “next big thing”? I believe that is indeed the case. We’ve gone from PCs to smartphones to tablets…and now we’re on to incremental improvements in each of these: Ultrabooks/2-in-1s/convertibles; larger-screen “phablet” smartphones; thinner PCs and Macs with better processors; and, in the case of Windows, touchscreen machines (come on, Apple…give us a touchscreen or an external mouse on the iPad Pro!).

My point? What’s the next great technology product or disruptive thingy waiting in the wings that can increase productivity output?

Cue the Music: IoT to the Rescue?

Experts, journalists and pundits (like me) point to the 20 to 50 billion Internet of Things (IoT) doodads that are “out there.” We have smart homes on the horizon, and I like (not love) my Nest thermostat. Self-driving, V2V/V2I (vehicle-to-vehicle/infrastructure) cars might foment something disruptive by saving lives (more workers, lower insurance costs) or reducing commute time so workers can be at their desks more. That would boost both output (the productivity numerator) and demand for technology. In fact, the IoT might revolutionize lots of things in our world.

Steve Case, former CEO of AOL and clearly a Guru when it comes to technology revolutions, believes the third wave of the Internet will soon be upon us. In a FORTUNE article entitled “Steve Case Wants Tech to Love the Government” he forecasts the Internet as being “more seamlessly and pervasively in every aspect of our lives.” This, in fact, is the promise of the IoT.

Still, I’m not ready to buy it all just yet. As friend and colleague Ray Alderman, CEO of VITA, recently pointed out to me, a lot of recent technology innovation is in software and apps. These (usually) cost little to build but often don’t create major disruptive market changes. Will the next version of Word change your life? How about that Evernote update, or Adobe moving to a cloud-based model? Not even the radical change in CRM wrought by Salesforce.com caused all the salespeople of the free world to sing its praises and meet their quotas.

So after a lunchtime reading roundup of the aforementioned stories and headlines, my conclusion is this: we’re slowing down, folks. I’m afraid corporate profits, the stock market, wages, and unemployment are all watching carefully.

Or…we could wait for tomorrow’s news. There’s bound to be a different conclusion to be reached.

Semiconductor Consolidation Continues…in Reverse This Time

Mercury Systems buys three mil-related Microsemi businesses. This acquisition is the reverse of most semiconductor M&A: an IC company sold to a systems supplier.

Are you all caught up on the consolidation going on in the semiconductor industry? Heck, it looks like the airline industry of 2010, where you can’t remember which company used to be what. I mean…who bought US Air? Right; they actually bought American Airlines. Funny how the American logo lingers. But I digress.

Same deal in semiconductors. Intel has Altera in a $16.7 billion megadeal that closed in December 2015. NXP bought Freescale ($11.8 billion)—and Freescale itself used to be Motorola Semiconductor, co-creator (with IBM) of the PowerPC. Remember the Power architecture? Didn’t think so.

UK-based Dialog Semiconductor, whoever they are, bought nifty Atmel for $4.6 billion. I always liked Atmel’s broad product line and innovation in the 8-bit arena. (Hear that, Microchip? Someone’s got a bead on you.) According to CRN Magazine, 2015 was a watershed year in semiconductor M&A. There are several more deals I’m not mentioning.

Why was it so? Because the darned industry ain’t growing much now that the PC has matured and flattened, along with tablets and smartphones. The IoT offers promise (or hype), while the increasingly frightening and unstable world has people interested in defense spending again. What’s it been—two years since we all decided we didn’t need no stinkin’ military spending? As I write this, Europe just witnessed another devastating terrorist attack. God help them.

Through it all, companies like Mercury Systems (formerly Mercury Computer Systems) stuck to their guns—so to speak—and remained focused on building high-performance military boards, systems, software and IP. That’s why it should come as no surprise that this time, a board-level company like Mercury was the acquirer of three Microsemi mil-focused businesses.

Announced just today, 23 March 2016, Mercury Systems will spend $300 million to buy the embedded security, RF and microwave, and custom microelectronics business units from Microsemi. According to Mercury, the adjusted income of the businesses was around $28 million (EBITDA), making this a 10x buy and quite spendy among recent semiconductor deals.
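
A quick sanity check on that multiple, using only the figures quoted above:

```python
# Sanity check on the valuation multiple quoted above.
purchase_price = 300e6   # Mercury's announced price
adjusted_ebitda = 28e6   # the Microsemi units' adjusted EBITDA, per Mercury

multiple = purchase_price / adjusted_ebitda
print(f"{multiple:.1f}x EBITDA")  # ~10.7x, i.e. roughly "a 10x buy"
```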

But there’s much more to this deal than meets the eye, and there will likely be ramifications, too. First, I’ve got some personal familiarity with these Microsemi businesses. The embedded group was once the specialty military memory supplier White Electronic Designs, a large supplier not only to the DoD but to many of Mercury’s then-competitors, such as Dy4 Systems (now Curtiss-Wright). Under Microsemi, the business unit expanded its offerings by combining technology from the Security Solutions group in West Lafayette, IN.

The Security Solutions group is home to Microsemi’s commercial WhiteboxCRYPTO technology, used to keep sensitive IP secure (such as drug recipes or Hollywood CGI movie originals). Applying it to nonvolatile memories and combining it with physically unclonable function (PUF) semiconductors for key storage, Microsemi has been working on super-secure military ICs. It’s a sure bet that all of this IP is very attractive to Mercury Systems, its roadmap, and its DoD customers. In fact, under Mercury the technology may thrive much faster, since Mercury’s sales channels are more plugged into the end customers and key programs than a typical semiconductor sales force.

That leaves the third and final Microsemi business unit: the Camarillo, CA RF and Microwave group. I don’t know much about these guys as RF isn’t my thing. At first I thought this was the former Condor Engineering, but they are in Santa Barbara, CA, specialize in MIL-STD-1553 and ARINC-429 ICs, and were acquired by GE Intelligent Platforms (now Abaco) a few years ago. Actually, this is the former AML Communications company purchased by Microsemi for $28 million in 2011. Guess it wasn’t a core fit for Microsemi.

But it will be a fit for Mercury Systems, which has been steadily building up its own RF expertise since buying RF and microwave component and systems supplier Micronetics in 2012. Mercury has a growing RF, microwave and sensor business (called RF/M Components), and the group from Microsemi should fit nicely.

In fact, Mercury Systems has been on a strategic shopping trip since their Echotek purchase 11 years ago. Back then, we all were shocked—even insulted!—that Mercury would take a board (and IC) supplier of data acquisition and mixed signal modules off the market. As with Microsemi’s products today, back then many of the COTS industry’s leading board companies purchased from Echotek.

So this acquisition is the reverse of most semiconductor M&A: an IC company went to a systems supplier. It remains to be seen whether Mercury’s new Microsemi groups will continue to sell product to Mercury’s competitors. Oh, sure: this acquisition is too small ($300 million) to attract any DoJ antitrust attention. But I wonder whether the likes of Curtiss-Wright or Abaco, or any other COTS company doing business with these groups, will continue to buy from their competitor Mercury.

Still: I’ve got to hand it to Mercury Systems. This is a company that is laser-focused on its core competency: high-performance computing systems. And those systems need the best specialty ICs and intellectual property.

Editor’s note: the author has, or has had material relationships with the companies mentioned in this article. The opinions quoted are subjective and entirely the author’s own.

 

From ARM TechCon: Two Companies Proclaim IoT “Firsts” in mbed Zone

UPDATE: Blog updated 14 Dec 2015 to correct typos in ARM nomenclature.

Showcased at ARM’s mbed Zone, Silicon Labs and Zebra Technologies show off two IoT “Firsts”.

ARM’s mbed Zone—a huge dedicated section on the ARM TechCon 2015 exhibit floor—is the place where the hottest things for ARM’s new mbed OS are shown. ARM’s mbed is designed to make it easy to securely mesh IoT devices and their data to the cloud. Introduced at TechCon 2014 a year ago, mbed was just a concept; now it’s steps closer to reality.

Watch This!

The wearables market is one of three focus areas for ARM’s development efforts, along with Smart Cities and Smart Home. ARM’s first wearable dev platform is a smart watch worn by ARM IoT Marketing VP Zach Shelby and shown in Figure 1. It’s based on ARM’s wearables reference platform featuring mbed OS integration—with a key feature being power management APIs.

Figure 1: ARM’s smart watch development proof-of-concept, worn by ARM IoT Marketing VP Zach Shelby at ARM TechCon 2015.

According to IoT MCU and sensor supplier Silicon Labs, which co-developed the APIs with ARM, they “provide a foundation for all peripheral interactions in mbed OS” and are designed with low power and long battery life in mind. No one wants to charge a smart watch during the day: that’s a non-starter. The APIs ensure things like minimal polling and interrupts, placing peripherals in deep sleep modes, and basically wringing every bit of power efficiency out of systems designed for long battery life. Mbed OS clearly continues ARM’s focus on low power, but emphasizes IoT ease of design.
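
To see why those APIs matter, here’s a rough duty-cycle battery-life estimate. All the current and capacity numbers below are invented for illustration—they are not ARM or Silicon Labs figures—but they show why keeping the MCU asleep dominates battery life.

```python
# Rough battery-life model for a duty-cycled wearable. All numbers are
# illustrative guesses, not vendor figures: the point is that time spent in
# deep sleep -- which power-management APIs are meant to maximize -- dominates.

def battery_life_hours(capacity_mah, active_ma, sleep_ma, active_fraction):
    avg_ma = active_fraction * active_ma + (1 - active_fraction) * sleep_ma
    return capacity_mah / avg_ma

battery = 150.0  # mAh, a plausible small smart-watch cell (assumed)
print(round(battery_life_hours(battery, active_ma=10.0, sleep_ma=0.005, active_fraction=0.05)))
# ~297 hours when the system sleeps 95% of the time...
print(round(battery_life_hours(battery, active_ma=10.0, sleep_ma=0.005, active_fraction=0.50)))
# ...but only ~30 hours if sloppy polling keeps the MCU awake half the time.
```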

In the mbed Zone, Silicon Labs was showing off its version of ARM’s smart watch, which it calls Thunderboard Wear. It’s blown up to demo-board size and complete with Silicon Labs’ custom-designed blood pressure and ambient light sensors (Figure 2). The board is based on the company’s ARM Cortex-M3-based EFM32 Giant Gecko SoC. Silicon Labs’ main ARM TechCon announcement—and the reason the company is in the mbed Zone—is that all Gecko MCUs now support mbed OS. We’ll dig into what this means technically in a future post.

Figure 2: Silicon Labs’ version of ARM’s smart watch—blown up into demo board size and complete with Cortex-M3 Giant Gecko MCU and BP sensor. The rubber straps remind that this is still “wearable”, though only sort-of.

“Hello Chris”

Further proving the growing maturity of mbed OS and its ecosystem is the Zebra Technologies “wireless mbed to cloud” demo, shown during Atmel’s evening reception at ARM TechCon (Zebra is also in the mbed Zone). Starting with Atmel’s ATSAMW25-PRO demo board plus display add-on (Figure 4) containing ARM Cortex-M3 and Cortex-M4 Atmel SoCs, Zebra demonstrated communicating directly from a console to the Wi-Fi-equipped demo.

Figure 4: Zebra Technologies demonstrates easy wireless connectivity to IoT devices using Atmel’s SAMW25 MCU board and OLED1 expansion board.

When “Hello Chris” was typed into Zebra’s Zatar browser-based software console (Figure 5), the sentence appeared on the tiny display almost immediately. More than a parlor trick, the demo shows the promise of the IoT, ARM cores, and the interoperability of mbed OS connected all the way back to the cloud and the Zatar device portal.

Figure 5: Zebra’s Zatar IoT cloud console dashboard.

Zebra’s Zatar cloud service works with Renesas’s Synergy IoT platform, Freescale’s Kinetis MCUs, and of course Atmel’s SoCs (will Atmel also create its own end-to-end ecosystem?). The Zebra “IoT Kit” demoed at TechCon is “the first mbed 3.0 Wi-Fi kit that offers developers a prototype to quickly test drive IoT,” said Zebra Technologies. If you’re familiar with ARM’s mbed OS connectivity/protocol stack diagram, Zebra uses the CoAP protocol to connect devices to the cloud. The company was a CoAP co-developer.
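
For readers curious what talking CoAP to a device looks like, here’s a minimal client sketch using the open source aiocoap Python library. The device address and resource path are invented placeholders—Zatar’s actual endpoints and authentication aren’t shown here.

```python
# Minimal CoAP client sketch using the open source aiocoap library.
# The host and resource path below are placeholders, not Zatar's real endpoints.
import asyncio
from aiocoap import Context, Message, PUT

async def send_text(uri: str, text: str) -> None:
    ctx = await Context.create_client_context()
    request = Message(code=PUT, uri=uri, payload=text.encode("utf-8"))
    response = await ctx.request(request).response
    print("Response code:", response.code)

# Push a string to a (hypothetical) display resource on a Wi-Fi IoT node.
asyncio.run(send_text("coap://192.168.1.50/display/text", "Hello Chris"))
```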

The significance of the demo is multifold: quick development time using established Atmel hardware; cloud connectivity using Wi-Fi; an open-standard IoT protocol; and compliance with ARM’s latest mbed OS 3.0.

The fact that the Zatar console easily connects to multiple vendors’ processors means thousands or tens of thousands of IoT nodes can be controlled, updated, and queried for data with minimal effort. In short: creating wireless IoT products and using them just got a whole lot easier.

Zebra will be selling the Zebra ARM mbed IoT Kit for Zatar via distributors and more information is available on their website at www.zatar.com/IoTKit.

 

AMD Targets Embedded Graphics

As the PC market flounders, AMD continues its focus on embedded, this time with three new GPU families.

The widescreen LCD digital sign at my doctor’s office tells me today’s date, that it’s flu season, and that various health maintenance clinics are available if only I’d sign up. I feel guilty every time.

An electronic digital sign, mostly text based. (Courtesy: Wikimedia Commons.)

These kinds of static, text-only displays are not the kind of digital sign that GPU powerhouses like AMD are targeting. Microsoft Windows-based text running in an endless loop requires no graphics or imaging horsepower at all.

Instead, high performance is captured in those Minority Report multimedia messages that move with you across multiple screens down a hallway; the immersive Vegas-style electronic gaming machines that attract senior citizens like moths to a flame; and the portable ultrasound machine that gives a nervous mother the first images of her baby in HD. These are the kinds of embedded systems that need high-performance graphics, imaging, and encode/decode hardware.

AMD announced three new embedded graphics families, spanning from low power (4 displays) up to 6 displays and 1.5 TFLOPS of number crunching for high-end GPU graphics processing.

Advanced Micro Devices wants you to think of their GPUs for your next embedded system.

AMD just announced a collection of three new embedded graphics processor families using 28nm process technology designed to span the gamut from multi-display and low power all the way up to a near doubling of performance at the high end.  Within each new family, AMD is looking to differentiate from the competition at both the chip- and module/board-level. Competition comes mostly from Nvidia discrete GPUs, although some Intel processors and ARM-based SoCs cross paths with AMD. As well, AMD is pushing its roadmap quickly away from previous generation 40nm GPU devices.

Comparison between AMD 40nm and 28nm embedded GPUs.

A Word about Form Factors

Sure, AMD’s got plug-in boards in PCI Express format—long ones, short ones, and ones with big honking heat sinks, fans and plenty of I/O connections. AMD’s high-end embedded GPUs like the new E8870 Series are available on PCIe cards and boast up to 1500 GFLOPS (single precision) and 12 Compute Units. They’ll drive up to 6 displays and burn up to 75W of power without an on-board fan, and because they’re on AMD’s embedded roadmap, they’ll be around for 5 years.

An MXM (Mobile PCIe Module) format PCB containing AMD’s mid-grade E8950 GPU.

Compared to AMD’s previous embedded E8860 Series, the E8870 has 97% more 3DMark 11 performance when running from 4GB of onboard memory. Interestingly, besides the PCIe version—which might only be considered truly “embedded” when plugged into a panel PC or thin client machine—AMD also supports the MXM format.  The E8870 will be available on the Type B Mobile PCI Express Module (MXM) that’s a mere 82mm x 105mm and complete with memory, GPU, and ancillary ICs.

Middle of the Road

For more of a true embedded experience, AMD’s E8950MXM still drives 6 displays and works with AMD’s EyeFinity capability of stitching multiple displays together in Jumbotron fashion. Yet the 3000 GFLOPS (yes, that’s 3000 GFLOPS peak, single precision) little guy still has 32 Compute Units and 8 GB of GPU memory, and is optimized for 4K (UHD) encode/decode. If embedded 4K displays are your thing, this is the GPU you need.

Hardly middle of the road, right? Depending upon the SKU, this family can burn up to 95W and is available exclusively on one of those MXM modules described above. In its embedded version, the E8950 is available for 3 years (oddly, two fewer than the others).

Low Power, No Compromises

Yet not every immersive digital sign, MRI machine, or arcade console needs balls-to-the-wall graphics rendering and 6 displays. For this reason, AMD’s E6465 series focuses on low power and a small form factor (SFF) footprint. Able to drive 4 displays with a humble 2 Compute Units, the series still boasts 192 GFLOPS (single precision), 2 GB of GPU memory, and 5 years of embedded life, yet consumes a mere 20W.

The E6465 is available in PCIe, MXM (the smaller Type A size at 82mm x 70mm), and a multichip module. The MCM format really looks embedded, with the GPU and memory all soldered on the same MCM substrate for easier design-in onto SFFs and other board-level systems.

More Than Meets the Eye

While AMD is announcing three new embedded GPU families, it’s easy to think the story stops with the GPU itself. It doesn’t. AMD doesn’t get nearly enough recognition for the suite of graphics, imaging, and heterogeneous processing software available for these devices.

For example, in mil/aero avionics AMD has a few glass-cockpit design wins, such as with Airbus. Some legacy mil displays don’t follow standard refresh timing, so the new embedded GPU products support custom timing parameters. Settings like the timing standard, front porch, refresh rate and even the pixel clock are programmable—ideal for the occasional non-standard military glass cockpit.
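
As a concrete example of what programmable timing buys you: the required pixel clock is simply total pixels per frame (active plus blanking) times the refresh rate. Here’s a small sketch using illustrative VESA-style blanking numbers for a generic 1024x768 panel (not a specific cockpit display):

```python
def pixel_clock_hz(h_active, h_front, h_sync, h_back,
                   v_active, v_front, v_sync, v_back, refresh_hz):
    """Pixel clock for a display mode: total pixels per frame times refresh rate."""
    h_total = h_active + h_front + h_sync + h_back
    v_total = v_active + v_front + v_sync + v_back
    return h_total * v_total * refresh_hz

# Illustrative 1024x768 @ 60 Hz with typical blanking intervals.
print(pixel_clock_hz(1024, 24, 136, 160, 768, 3, 6, 29, 60) / 1e6, "MHz")  # ~65 MHz
```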

AMD is also a strong supporter of OpenCL and OpenGL—programming and graphics languages that ease programmers’ coding efforts. They also lend themselves to creating DO-254 (hardware) and DO-178C (software) certifiable systems, such as those found in Airbus military airframes. Airbus Defence has selected AMD graphics processors for next-gen avionics displays.

Avionics glass cockpits, like this one from Airbus, are prime targets for high-end embedded graphics. AMD has a design win in one of Airbus’ systems.

Finally, AMD is a founding member of the HSA Foundation, the organization that has released the Heterogeneous System Architecture (HSA) specification version 1.0, also designed to make programmers’ jobs way easier when using multiple dissimilar “compute engines” in the same system. Companies like ARM, Imagination, MediaTek and others are HSA Foundation supporters.

 

 

Quiz question: I’m an embedded system, but I’m not a smartphone. What am I?

In the embedded market, there are smartphones, automotive, consumer…and everything else. I’ve figured out why AMD’s G-Series SoCs fit perfectly into the “everything else”.

Since late 2013 AMD has been talking about its G-Series of Accelerated Processing Unit (APU) x86 devices that mix an Intel-compatible CPU with a discrete-class GPU and a whole pile of peripherals like USB, serial, VGA/DVI/HDMI and even ECC memory support. The devices sounded pretty nifty—in either SoC flavor (“Steppe Eagle”) or without the GPU (“Crowned Eagle”). But it was a head-scratcher where they would fit. After all, we’ve been conditioned by the smartphone market to think that any processor “SoC” that doesn’t contain an ARM core isn’t an SoC.

AMD’s Stephen Turnbull, Director of Marketing, Thin Client markets.

Yes, ARM dominates the smartphone market; no surprise there.

But there are plenty of other professional embedded markets that need CPU/GPU/peripherals where the value proposition is “Performance per dollar per Watt,” says AMD’s Stephen Turnbull, Director of Marketing, Thin Clients. In fact, AMD isn’t even targeting the smartphone market, according to General Manager Scott Aylor in his many presentations to analysts and the financial community.

AMD instead targets systems that need “visual compute”: any business-class embedded system that mixes computation with single- or multi-display capability at a “value price.” What this really means is x86-class processing—and all the goodness associated with the Intel ecosystem—plus one or more LCDs. Even better if those LCDs are high-def, need 3D graphics or other fancy rendering, and run industry-standard software such as OpenCL, OpenGL, or DirectX. AMD G-Series SoCs run from 6W up to 25W; the low end of that range is considered very power thrifty.

What AMD’s G-Series does best is cram an entire desktop motherboard’s worth of peripheral I/O, plus a graphics card, onto a single 28nm SoC. Who needs this? Digital signs—where up to four LCDs make up the whole image—thin clients, casino gaming, avionics displays, point-of-sale terminals, network-attached storage, security appliances, and oh so much more.

G-Series SoC on the top with peripheral IC for I/O on the bottom.

According to AMD’s Turnbull, the market for thin client computers is growing at 6 to 8 percent CAGR (per IDC), and “AMD commands over 50 percent share of market in thin clients.” Recent design wins with Samsung, HP and Fujitsu validate that using a G-Series SoC in the local box provides more-than-ample horsepower for data movement, encryption/decryption of central server data, and even local on-the-fly video encode/decode for Skype or multimedia streaming.

Typical use cases include government offices where all data is server-based, bank branch offices, and “even classroom learning environments, where learning labs standardize content, monitor students and centralize control of the STEM experience,” says AMD’s Turnbull.

Samsung LFDs (large format displays) use AMD APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

But what about other x86 processors in these spaces? I’m thinking of various SKUs from Intel, such as its recent Celeron and Pentium offerings (legacy names, but based on modern Ivy Bridge and Haswell architectures) and various Atom flavors in both dual- and quad-core colors. According to AMD’s published literature, G-Series SoCs outperform dual-core Atoms by 2x (multi-display) or 3x (overall performance) running industry-standard benchmarks for standard and graphics computation.

And then there’s that on-board GPU. If AMD’s Jaguar-based CPU core isn’t enough muscle, the system can load-balance (in performance and power) to move algorithm-heavy loads to the GPU for General Purpose GPU (GPGPU) number crunching. This is the basis for AMD’s efforts to bring the Heterogeneous System Architecture (HSA) spec to the world. Even companies like TI and ARM have jumped onto this one for their own heterogeneous processors.
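
To illustrate the kind of offload being described, here’s a minimal OpenCL example using the pyopencl Python bindings: an element-wise computation is handed to whatever OpenCL device the runtime exposes (for a G-Series part, that would be the on-chip Radeon GPU). It’s a generic sketch, not AMD-specific code.

```python
# Minimal GPGPU offload sketch using pyopencl: an element-wise add runs on
# whatever OpenCL device (e.g. an on-chip Radeon GPU) the runtime exposes.
import numpy as np
import pyopencl as cl

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

ctx = cl.create_some_context()           # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)  # run on the device
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)      # copy the result back to the host
assert np.allclose(out, a + b)
```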

G-Series: more software than hardware.

In a nutshell, after two years of reading about (and writing about) AMD’s G-Series SoCs, I’m beginning to “get religion” that the market isn’t all about smartphone processors. Countless business-class embedded systems need Intel-compatible processing, multiple high-res displays, lots of I/O, myriad industry-standard software specs…and all for a price/Watt that doesn’t break the bank.

So the answer to the question posed in the title above is simply this: I’m a visually-oriented embedded system. And I’m everywhere.

This blog was sponsored by AMD.

 

 

A Sign of the Times

AMD’s FirePro series lights up Godzilla-sized Times Square digital sign.

[Editor's note: blog updated 8-18-15 to remove "Radeon" and make other corrections.]

They say the lights are bright on Broadway, and they ain’t kidding.  A new AMD-powered digital sign makes a stadium Jumbotron look small.

I’ve done a few LAN parties and appreciate an immersive, high-res graphics experience. But nothing could have prepared me for the whopping 25,000 square feet of graphics in Times Square powered by AMD’s FirePro series (1535 Broadway, between 45th and 46th Streets).

The UltraHD media wall is the ultimate digital sign, comprising the equivalent of about 24 million RGB LED pixels. The media wall is a full city block long by 8 stories high! Designed and managed by Diversified Media Group, the sign is thought to be the largest of its kind in the world, and certainly the largest in the U.S.

Three AMD FirePro UltraHD graphics cards drive the largest digital sign in the world.
This view of Times Square shows the commercial importance of high-res digital signs. [1 Times Square night 2013, by Chensiyuan; Licensed under GFDL via Wikimedia Commons.]

The combined 10,048 x 2,368 pixel “display” is powered by a mere three AMD FirePro graphics cards. Each card drives six sections of the overall display wall. The whole UHD experience is so realistic because of AMD’s Graphics Core Next architecture that executes billions of operations in parallel per cycle.
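
The arithmetic behind those numbers is straightforward, using only the figures quoted above:

```python
# Pixel math for the Times Square wall, using the figures quoted in the article.
width, height = 10_048, 2_368
cards = 3
zones_per_card = 6

total_pixels = width * height
print(total_pixels)                              # 23,793,664 -- the "~24 million" pixels
print(total_pixels // cards)                     # ~7.9M pixels driven per FirePro card
print(total_pixels // (cards * zones_per_card))  # ~1.3M pixels per display zone
```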

The Diversified Media Group’s Times Square digital sign is powered by AMD FirePro graphics, shown here under construction. [Courtesy: Diversified Media Group.]

AMD’s well-proven EyeFinity capability sends partitioned images to various display zones (up to six), all coordinated across the three graphics cards using the FirePro S400 synchronization module.

The FirePro graphics family was introduced at NAB 2014 specifically for high-res, media-intensive applications like this. There’s 16 GB of GDDR5 memory, PCIe 3.0 for high-speed I/O, and the 28nm Graphics Core Next architecture balances 3D rendering with GPGPU computation. It all adds up to the performance needed for the Times Square “mombo-tron” skyscraper display.

Only three of AMD’s FirePro W600 graphics cards like these power America’s largest digital sign in Times Square.

According to the New York Times, approximately 300,000 people each day will see the sign, with advertising that might sell for as much as $2.5 million for four weeks—certainly some pretty expensive real estate, even for NYC. So the sign must look astounding and work flawlessly.

This blog was sponsored by AMD.

 

ESC 2015 Kicks Off With Embedded Security Finally Getting Attention

On Day 1, ESC vendors emphasize security. At (long) last.

I have for years been sounding the klaxon on the need for embedded security. From designs that are locked down and not easily accessed—locally or remotely—to protecting the data at rest and in flight, designers need to start thinking about security in their embedded systems.

As well, I’ve also harped on the need to write secure embedded code and to verify it using static analysis (and other) tools. Witness last year’s OpenSSL “Heartbleed” fiasco, where bad code let servers dump portions of their memory to hackers.

This year’s ESC 2015 showed plenty of security-conscious vendors ready to help embedded designers.

Security on the Side

On Day 1 of ESC Santa Clara (a smaller event held this year at the Santa Clara Marriott) I’d only just gotten my press badge when I noticed a bag of new USB cables marked “Free”. That’s a great conversation starter, so I asked the guy: “What’s with the free cables?” He introduced himself as Colin O’Flynn of NewAE Technology; the cables were excess from a USB preso he’d given.

ChipWhisperer-Lite, a $250 side channel analysis tool.

NewAE, it turns out, is looking to sell really cheap side channel power analysis tools that help identify attack surfaces in embedded hardware. Their $250 ChipWhisperer-Lite was a Kickstarter project, and competing products from Cryptography Research Inc (CRI) cost maybe 40x that. CRI has been an ESC exhibitor in the past and always wows attendees with demos that correctly identify passwords in devices simply by monitoring minute power fluctuations as the system authenticates. Credible and spook-like stuff.
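
For the curious, the core idea behind those demos can be sketched in a few lines: leakage that correlates with a secret lets you rank key guesses by correlation. Below is a toy simulation of correlation power analysis—the “power traces” are synthesized from a Hamming-weight leakage model—not NewAE’s or CRI’s actual tooling.

```python
# Toy correlation power analysis (CPA): simulated power traces leak the Hamming
# weight of (plaintext XOR secret); the correct key guess correlates best.
# A deliberately simplified illustration, not real attack tooling.
import numpy as np

rng = np.random.default_rng(seed=1)
SECRET = 0x3C                                 # the key byte the "attack" recovers

def hw(x: int) -> int:
    return bin(x).count("1")                  # Hamming weight of a byte

plaintexts = rng.integers(0, 256, size=2000)
leak = np.array([hw(int(p) ^ SECRET) for p in plaintexts], dtype=float)
traces = leak + rng.normal(0.0, 1.0, size=leak.size)   # add measurement noise

def score(guess: int) -> float:
    model = [hw(int(p) ^ guess) for p in plaintexts]
    return abs(np.corrcoef(model, traces)[0, 1])

best = max(range(256), key=score)
print(hex(best))   # 0x3c -- the secret byte, recovered from "power" alone
```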

If side channel power analysis can be done for $250, then embedded designers should start doing it.

Root of Trust

Also in attendance on Day 1 was the Trusted Computing Group—the folks who bring us the TPM (Trusted Platform Module). I don’t know much about this IC as applied to embedded, but I see it listed in presentations from semiconductor companies such as Freescale and Intel. The TPM shows up in some consumer computer gear, though more frequently in hardware targeted at the enterprise. Are TPM ICs used in embedded?

They are, said Steve Hanna, head of several TCG working groups and Sr. Principal Technical Marketing guy for Chip Card & Security at Infineon. He briefed me on several TCG announcements at ESC 2015. In fact, TCG is targeting end-node IoT devices with a different set of security goals.

Where confidentiality and integrity are the goals in laptops and enterprise systems, the TPM establishes a hardware root of trust primarily to protect crypto keys for secure boot, data storage and data transfer.

But in the IoT, it’s availability that matters most. If a networked pipeline has been hacked and the valves all opened, the primary concern is accessing those valves to shut ‘em off. That’s not to say that protection from hacking a home security camera or baby monitor isn’t also a goal (Figure).

The Trusted Computing Group sees TPMs all over embedded systems.
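
To make “root of trust for secure boot” concrete, here’s a purely conceptual sketch of measured boot. A real TPM does the measurement, key storage and comparison in dedicated silicon with keys that never leave the chip; this is only an illustration of the idea, not actual TPM commands.

```python
# Conceptual secure-boot sketch: measure the firmware, then compare against a
# value anchored to a device-unique key. Illustration only -- a real TPM does
# this in hardware with protected key storage.
import hashlib
import hmac

DEVICE_KEY = b"device-unique-key-provisioned-at-manufacture"   # hypothetical

def measure(firmware: bytes) -> bytes:
    return hashlib.sha256(firmware).digest()

def expected_tag(golden_firmware: bytes) -> bytes:
    """Provisioning step: record what 'good' firmware should measure to."""
    return hmac.new(DEVICE_KEY, measure(golden_firmware), hashlib.sha256).digest()

def verify_boot(firmware: bytes, tag: bytes) -> bool:
    candidate = hmac.new(DEVICE_KEY, measure(firmware), hashlib.sha256).digest()
    return hmac.compare_digest(candidate, tag)    # boot only if this is True

golden = b"firmware v1.0 image bytes"
tag = expected_tag(golden)
print(verify_boot(golden, tag))                      # True  -> boot continues
print(verify_boot(b"tampered firmware image", tag))  # False -> halt / recover
```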

For ESC 2015, TCG announced the 2.0 version of the TPM spec (ISO/IEC 11889:2015), which includes, among several enhancements, support for more TPM profiles to allow vendor (and system) flexibility. Of note, says TCG’s Steve Hanna, is the addition of “cryptographic agility”—the ability to add more crypto algorithms as the need evolves. This is found in Infineon’s new OPTIGA Trust E with Elliptic Curve Cryptography, announced at ESC 2015 today.

Finally, TCG also started rolling out their plans for the IoT that start with the Architect’s Guide to IoT, one of eight Guides currently available.  Clearly, more are planned. Watch this space.

Infineon’s OPTIGA Trust E

Day 1’s “security” focus concluded with Infineon’s addition of the OPTIGA Trust E IC to the company’s line of turn-key authentication security ICs. OPTIGA runs the gamut from easy-to-use crypto authentication at the low end to the Common Criteria EAL 5+, programmable OPTIGA Trust P at the high end.

Sandwiched in the middle is the newly announced “E” version, focused on IoT nodes, medical, smart home, and other middle-of-the-road consumer and industrial automation applications.

According to Infineon, key storage, root of trust and crypto capabilities are essential in all areas of IoT—including the home (see Figure). A fake server, for example, could be used to send false commands and open an IoT-controlled garage door or start a remotely-enabled automobile.

Infineon’s view of the embedded IoT.

As Day 1 wound down, I was already receiving notices of security-related Day 2 announcements. It seems like security has hit vendors’ radar screens.

Will security attract the designers of embedded systems now that ICs, tools and specs are appearing?

ESC 2015 Day 2 looms.

 

AMD’s “Beefy” APUs Bulk Up Thin Clients for HP, Samsung

There are times when a tablet is too light, and a full desktop too much. The answer? A thin client PC powered by an AMD APU.

Note: this blog is sponsored by AMD.

A desire to remotely access my Mac and Windows machines got me thinking about thin client architectures. A thin “client” machine has sufficient processing for local storage and display—plus keyboard, mouse and other I/O—and connects remotely to a beefier “host” elsewhere. The host may be in the cloud or merely somewhere else on a LAN, sometimes intentionally inaccessible for security reasons.

Thin client architectures—or just “thin clients”—find utility in call centers, kiosks, hospitals, “smart” monitors and TVs, military command posts and other multi-user, virtualized installations. At times they’ve been characterized as low performance or limited in functionality, but that’s changing quickly.

They’re getting additional processing and graphics capability thanks to AMD’s G-Series and A-Series Accelerated Processing Units (APUs). By some analysts’ reckoning, AMD is number one in thin clients, and the company keeps winning designs with its highly integrated x86-plus-Radeon-graphics SoCs—most recently with HP and Samsung.

HP’s t420 and mt245 Thin Clients

HP’s ENERGY STAR certified t420 is a fanless thin client for call centers, Desktop-as-a-service and remote kiosk environments (Figure 1). Intended to mount on the back of a monitor such as the company’s ProDisplays (like you see at the doctor’s office), the unit runs HP’s ThinPro 32 or Smart Zero Core 32 operating system, has either 802.11n Wi-Fi or Gigabit Ethernet, 8 GB of Flash and 2 GB of DDR3L SDRAM.

Figure 1: HP’s t420 thin client is meant for call centers and kiosks, mounted to a smart LCD monitor. (Courtesy: HP.)

USB ports for keyboard and mouse supplement the t420’s dual display capability (DVI-D  and VGA)—made possible by AMD’s dual-core GX-209JA running at 1 GHz.

Says AMD’s Scott Aylor, corporate vice president and general manager, AMD Embedded Solutions: “The AMD Embedded G-Series SoC couples high performance compute and graphics capability in a highly integrated low power design. We are excited to see innovative solutions like the HP t420 leverage our unique technologies to serve a broad range of markets which require the security, reliability and low total cost of ownership offered by thin clients.”

The whole HP thin client consumes a mere 45W and according to StorageReview.com, will retail for $239.

Along the lines of a lightweight mobile experience, HP has also chosen AMD for its mt245 Mobile Thin Client (Figure 2). The thin client “cloud computer” resembles a 14-inch (1366 x 768 resolution) laptop with up to 4 GB of SDRAM and a 16 GB SSD; the unit runs 64-bit Windows Embedded Standard 7P on AMD’s quad-core A6-6310 APU with Radeon R4 GPU. There are three USB ports, one VGA and one HDMI port, plus Ethernet and optional Wi-Fi.

Figure 2: HP’s mt245 is a thin client mobile machine, targeting healthcare, education, and more. (Courtesy: HP.)

Like the t420, the mt245 consumes a mere 45W and is intended for employee mobility but is configured for a thin client environment. AMD’s director of thin client product management, Stephen Turnbull says the mt245 targets “a whole range of markets, including education and healthcare.”

At the core of this machine, pun intended, is the Radeon GPU that provides heavy-lifting graphics performance. The mt245 can not only take advantage of virtualized cloud computing, but has local moxie to perform graphics-intensive applications like 3D rendering. Healthcare workers might, for example, examine ultrasound images. Factory technicians could pull up assembly drawings, then rotate them in CAD-like software applications.

Samsung Cloud Displays

An important part of Samsung’s displays business involves “smart” displays, monitors and televisions. Connected to the cloud or operating autonomously as a panel PC, many Samsung displays need local processing such as that provided by AMD’s APUs.

Samsung’s recently announced (June 17, 2015) 21.5-inch TC222W and 23.6-inch TC242W also use AMD G-Series devices in thin client architectures. The dual-core 2.2 GHz GX222 with Radeon HD 6290 graphics powers both displays at 1920 x 1080 (Full HD), provides six USB ports and Ethernet, and runs Windows Embedded 7 from 4 GB of RAM and a 32 GB SSD.

Figure 3: Samsung’s Cloud Displays also rely on AMD G-Series APUs.

Said Seog-Gi Kim, senior vice president, Visual Display Business, Samsung Electronics, “Samsung’s powerful Windows Thin Client Cloud displays combine professional, ergonomic design with advanced thin-client technology.” The displays rely on the company’s Virtual Desktop Infrastructure (VDI) through a centrally managed data center that increases data security and control (Figure 3). Applications include education, business, healthcare, hospitality or any environment that requires virtualized security with excellent local processing and graphics.

Key to the design wins is the performance density of the G-Series APUs, coupled with legacy x86 software interoperability. The APUs—for both HP and Samsung—add more beef to thin clients.

 

Proprietary “Standards” Feel Way Too Controlling

In the embedded industry, we’re surrounded by standards. Most are open, some are not. Open is better.

I recently got torqued at being held over a barrel once again for a proprietary embedded doodad—an Apple Lightning cable. All my old Apple cables are now useless, and Apple doesn’t use the very open (and cheap!) micro-USB cable that’s standard everywhere.

Some common cables: only the Apple Lightning cable on the left is proprietary. (Courtesy: Wikimedia Commons, by Algr).

Embedded boards also come with myriad open and not-so-open standards. If it’s not a standard managed by a consortium—say, PCIe/104 OneBank from the PC/104 Consortium, or SMARC from SGET—designers should carefully weigh the pros and cons of choosing that vendor’s “standard”. Often it makes sense. But beware of the commitment you’re making.

This is one reason that Linux, the world’s most popular open source software, became so prevalent: devs and their employers could control their own destiny.

Check out my full rant here: http://bit.ly/1U4PYcS

 

Move Over Arduino, AMD and GizmoSphere Have a “Jump” On You with Graphics

The UK’s National Videogame Arcade relies on CPU, graphics, I/O and openness to power interactive exhibits.

Editor’s note: This blog is sponsored by AMD.

When I was a kid I was constantly fascinated with how things worked. What happens when I stick this screwdriver in the wall socket? (Really.) How come the dinner plate falls down and not up?

We humans have to try things for ourselves in order to fully understand them; this sparks our creativity and, for many of us, becomes a life’s calling.

Attempting to catalyze visitors’ curiosity, the UK’s National Videogame Arcade (NVA) opened in March 2015 with the sole intention of getting children and adults interested in videogames through the use of interactive exhibits, most of which are hands-on. The hope is that young people will first be stimulated by the games, and secondly that they someday unleash their creativity on the videogame and tech industries.

The UK’s National Videogame Arcade promotes gaming through hands-on exhibits powered by GizmoSphere embedded hardware.

Might As Well “Jump!”

The NVA is located in a corner building with lots of curbside windows—imagine a fancy New York City department store but without the mannequins in the street-side windows. Spread across five floors and a total of 33,000 square feet, the place is a cooperative effort between GameCity (a nice bunch of gamers), the Nottingham City Council, and local Nottingham Trent University.

The goal of pulling in 60,000 visitors a year is partly achieved by the NVA’s signature exhibit “Jump!” that allows visitors to experience gravity (without the plate) and how it affects videogame characters like those in Donkey Kong or Angry Birds. Visitors actually get to jump on the Jump-o-tron, a physics-based sensor that’s controlled by GizmoSphere’s Gizmo 2 development board.

The Jumpotron uses AMD’s G-Series SoC combining an x86 and Radeon GPU.

The heart of Gizmo 2 is AMD’s G-Series APU, combining a 64-bit x86 CPU and Radeon graphics processor. Gizmo 2 is the latest creation from the GizmoSphere nonprofit open source community which seeks to “bring the power of a supercomputer and the I/O capabilities of a microcontroller to the x86 open source community,” according to www.gizmosphere.org.

The open source Gizmo 2 runs Windows and Linux, bridging PC games to the embedded world.

“Jump!” allows visitors to experience—and tweak—gravity while examining its effect on on-screen characters. The combination requires extensive processing—up to 85 GFLOPS worth—plus video manipulation and display. What’s amazing is that “Jump!”, along with many other NVA exhibits, isn’t powered by rackmount servers but rather by the tiny 4 x 4 inch Gizmo 2, which supports DirectX 11.1, OpenGL 4.2, and OpenCL 1.2. It also runs Windows and Linux.
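
The physics being “tweaked” is just simple numerical integration. Here’s a tiny illustrative sketch of how a character’s jump arc changes with gravity; it is not the NVA exhibit’s actual code.

```python
# How a videogame jump depends on gravity -- simple Euler integration,
# illustrative only (not the NVA exhibit's actual code).

def jump_airtime(jump_speed=6.0, gravity=9.8, dt=1.0 / 60.0):
    """Seconds a character stays airborne for a given take-off speed and gravity."""
    y, v, t = 0.0, jump_speed, 0.0
    while y >= 0.0:
        v -= gravity * dt      # gravity pulls the character back down
        y += v * dt
        t += dt
    return t

print(round(jump_airtime(gravity=9.8), 2))   # ~1.2 s with Earth gravity
print(round(jump_airtime(gravity=1.6), 2))   # ~7.5 s with Moon gravity -- floaty!
```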

AMD’s “G” Powers Gizmo 2

Gizmo 2 is a dense little package, sporting HDMI, Ethernet, PCIe, USB (2.0 and 3.0), plus myriad other A/V and I/O such as A/D/A—all of them essential for NVA exhibits like “Jump!” Says Ian Simons of the NVA, “Gizmo 2 is used in many of our games…and there are plans for even more games embedded into the building,” including furniture and even street-facing window displays.

Gizmo 2’s small size and support for open source software and hardware—plus the ability to develop on the gamer’s Unity engine—makes Gizmo 2 the preferred choice. Yet the market contains ample platforms from which to choose. Arduino comes to mind.

Gizmo 2’s schematic. The x86 G-Series SoC is loaded with I/O.

Compared to Arduino, the AMD G-Series SoC (GX-210HA) powering Gizmo 2 is orders of magnitude more powerful; plus, it’s x86-based and runs at 1.0 GHz (the integral GPU runs at 300 MHz). This makes the world’s cache of Intel-oriented, Windows-based software and drivers available to Gizmo 2—including some server-side programs. “NVA can create projects with Gizmo 2, including 3D graphics and full motion video, with plenty of horsepower,” says Simons. He’s referring to some big projects already installed at the NVA, plus others in the planning stages.

“One of things we’d like to do,” Simons says, “is continue to integrate Gizmo 2 into more of the building to create additional interactive exhibits and displays.” The small size of Gizmo 2, plus the wickedly awesome performance/graphics rendering/size/Watt of the AMD G-Series APU, allows Gizmo 2 to be embedded all over the building.

See Me, Feel Me

With a nod to The Who’s rock opera Tommy, the NVA building will soon have more Gizmo 2 modules wired into the infrastructure, mixing images and sound. There are at least three projects in the concept stage:

  • DMX addressable logic in the central stairway.  With exposed cables and beams, visitors would be able to control the audio, video, and possibly LED lighting of the stairwell area using a series of switches. The author wonders if voice or other tactile feedback would create all manner of immersive “psychedelic” A/V in the stairwell central hall.
  • Controllable audio zones in the rooftop garden. The NVA’s Yamaha-based sound system already includes 40 zones. Adding AMD G-Series horsepower to these zones would allow visitors to create individually customized light/sound shows, possibly around botanical themes. Has there ever been a Little Shop of Horrors videogame where the plants eat the gardener? I wonder.
  • Sidewalk animation that uses all those street-facing windows to animate the building, possibly changing the building’s façade (Star Trek cloak, anyone?) or even individually controlling games inside the building from outside (or presenting inside activities to the outside). Either way, all those windows, future LCDs, and reams of I/O will require lots more Gizmo 2 embedded boards.

The Gizmo 2 costs $199 and is available from several retailers such as Element14. With the Gerbers, schematics, and all the board-focused software open source, it’s no wonder this x86 embedded board is attractive to gamers. With AMD’s G-Series APU onboard, the all-in-one HDK/SDK is an ideal choice for embedded designs—and for those future gamers playing with the Gizmo 2 at the UK’s NVA.

BTW: The Who harkened from London, not Nottingham.