Quiz question: I’m an embedded system, but I’m not a smartphone. What am I?

In the embedded market, there are smartphones, automotive, consumer…and everything else. I’ve figured out why AMD’s G-Series SoCs fit perfectly into the “everything else”.

Since late 2013 AMD has been talking about their G-Series of Accelerated Processing Unit (APU) x86 devices that mix an Intel-compatible CPU with a discrete-class GPU and a whole pile of peripherals like USB, serial, VGA/DVI/HDMI and even ECC memory. The devices sounded pretty nifty in either flavor: as a full SoC (“Steppe Eagle”) or without the GPU (“Crowned Eagle”). But it was a head-scratcher where they would fit. After all, we’ve been conditioned by the smartphone market to think that any processor “SoC” that didn’t contain an ARM core wasn’t an SoC.

AMD’s Stephen Turnbull, Director of Marketing, Thin Client markets.


Yes, ARM dominates the smartphone market; no surprise there.

But there are plenty of other professional embedded markets that need CPU/GPU/peripherals where the value proposition is “Performance per dollar per Watt,” says AMD’s Stephen Turnbull, Director of Marketing, Thin Clients. In fact, AMD isn’t even targeting the smartphone market, according to General Manager Scott Aylor in his many presentations to analysts and the financial community.

AMD instead targets systems that need “visual compute”: any business-class embedded system that mixes computation with single- or multi-display capability at a “value price”. What this really means is x86-class processing (and all the goodness associated with the Intel ecosystem) plus one or more LCDs. Even better if those LCDs are high-def, need 3D graphics or other fancy rendering, and if the system runs industry-standard APIs such as OpenCL, OpenGL, or DirectX. AMD G-Series SoCs run from 6W up to 25W; the low end of this range is considered very power thrifty.
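
Turnbull’s “performance per dollar per Watt” yardstick is simple arithmetic. Here’s a toy sketch in Python; the parts and numbers are entirely hypothetical placeholders, not real benchmark data:

```python
# All parts and numbers below are hypothetical placeholders.

def perf_per_dollar_per_watt(score, price_usd, tdp_w):
    """Normalize a benchmark score by unit cost and power draw."""
    return score / (price_usd * tdp_w)

# (benchmark score, price in USD, TDP in Watts) -- invented for illustration
candidates = {
    "low-power SoC A": (1000, 50, 6),
    "mid-range SoC B": (2500, 120, 15),
    "desktop CPU C": (5000, 300, 65),
}

for name, (score, price, tdp) in candidates.items():
    print(f"{name}: {perf_per_dollar_per_watt(score, price, tdp):.2f}")
```

Note how the metric naturally favors the low-power, low-cost part even when its raw score is the smallest of the bunch; that is exactly the value proposition being pitched.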

What AMD’s G-Series does best is cram an entire desktop motherboard’s worth of CPU, peripheral I/O, and graphics onto a single 28nm SoC. Who needs this? Digital signs (where up to four LCDs make up the whole image), thin clients, casino gaming, avionics displays, point-of-sale terminals, network-attached storage, security appliances, and oh so much more.

G-Series SoC on the top with peripheral IC for I/O on the bottom.


According to AMD’s Turnbull, the market for thin client computers is growing at 6 to 8 percent CAGR (per IDC), and “AMD commands over 50 percent share of market in thin clients.” Recent design wins with Samsung, HP and Fujitsu validate that using a G-Series SoC in the local box provides more-than-ample horsepower for data movement, encryption/decryption of central server data, and even local on-the-fly video encode/decode for Skype or multimedia streaming.

Typical use cases include government offices where all data is server-based, bank branch offices, and “even classroom learning environments, where learning labs standardize content, monitor students and centralize control of the STEM experience,” says AMD’s Turnbull.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)


But what about other x86 processors in these spaces? I’m thinking of various SKUs from Intel, such as their recent Celeron and Pentium offerings (legacy brand names now based on modern Ivy Bridge and Haswell architectures) and various Atom flavors in both dual- and quad-core colors. According to AMD’s published literature, G-Series SoCs outperform dual-core Atoms by 2x (multi-display) or 3x (overall performance) running industry-standard benchmarks for standard and graphics computation.

And then there’s that on-board GPU. If AMD’s Jaguar-based CPU core isn’t enough muscle, the system can load-balance (in performance and power) to move algorithm-heavy loads to the GPU for General Purpose GPU (GPGPU) number crunching. This is the basis for AMD’s efforts to bring the Heterogeneous System Architecture (HSA) spec to the world. Even companies like TI and ARM have jumped onto this one for their own heterogeneous processors.
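
The load-balancing idea can be sketched as a toy dispatcher. The threshold and device names below are invented for illustration; a real HSA runtime weighs far more than job size (queue depth, power state, data locality):

```python
# Toy CPU/GPU dispatch heuristic -- illustrative only, not an HSA runtime.
GPU_THRESHOLD = 1_000_000  # work items; arbitrary cutoff for illustration

def pick_device(work_items, gpu_available=True):
    """Route big data-parallel jobs to the GPU, small ones to the CPU."""
    if gpu_available and work_items >= GPU_THRESHOLD:
        return "gpu"
    return "cpu"

# A small housekeeping task stays on the Jaguar cores; a big image-processing
# kernel goes to the on-board GPU.
print(pick_device(500))        # cpu
print(pick_device(4_000_000))  # gpu
```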

G-Series: more software than hardware.


In a nutshell, after two years of reading about (and writing about) AMD’s G-Series SoCs, I’m beginning to “get religion” that the market isn’t all about smartphone processors. Countless business-class embedded systems need Intel-compatible processing, multiple high-res displays, lots of I/O, myriad industry-standard software specs…and all for a price/Watt that doesn’t break the bank.

So the answer to the question posed in the title above is simply this: I’m a visually-oriented embedded system. And I’m everywhere.

This blog was sponsored by AMD.

PCI Express Switch: the “Power Strip” of IC Design

Need more PCIe channels in your next board design? Add a PCIe switch for more fanout.

Editor’s notes:

1. Despite the fact that Pericom Semiconductor sponsors this particular blog post, your author learns that he actually knows very little about the complexities of PCIe.

2. Blog updated 3-27-14 to correct the link to Pericom P/N PI7C9X2G303EL.

Perhaps you’re like me: power cords everywhere. Anyone who has more than one mobile doodad (from smartphone to iPad to Kindle and beyond) is familiar with the ever-present power strip.

An actual power strip from under my desk. Scary...


The power strip is a modern version of the age-old extension cord: it expands one wall socket into three, five, or more. Assuming there’s enough juice (AC amperage) to power it all, the power strip meets our growing hunger for more consumer devices (or rather: their chargers).


And so it is with IC design. PCI Express Gen 2 has become the most common interoperable, on-board way to add peripherals such as SATA ports, CODECs, GPUs, WiFi chipsets, USB hubs and even legacy peripherals like UARTs. The wall socket analogy applies here too: most new CPUs, SoCs, MCUs or system controllers lack sufficient PCI Express (PCIe) ports for all the peripheral devices designers need. Plus, as IC geometries shrink, system controllers also have lower drive capability per PCIe port and signals degrade rather quickly.

The solution to these host controller problems is a PCIe switch to increase fanout by adding two, three, or even eight additional PCIe ports with ample per-lane current sourcing capability.
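
A quick back-of-envelope on what each port buys you: a PCIe Gen 2 lane runs at 5 GT/s with 8b/10b encoding, which works out to 500 MB/s per direction before protocol overhead. A short sketch:

```python
# Per-direction PCIe bandwidth, back of the envelope.
# Gen 1/2 links use 8b/10b encoding (80% efficient); Gen 3 uses 128b/130b.

GTRANSFERS_PER_S = {1: 2.5, 2: 5.0, 3: 8.0}
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}

def throughput_mb_s(gen, lanes):
    """Raw link bandwidth in MB/s per direction, before protocol overhead."""
    gbit_s = GTRANSFERS_PER_S[gen] * ENCODING[gen] * lanes
    return gbit_s * 1000 / 8  # Gbit/s -> MB/s

# One Gen 2 lane moves 500 MB/s each way; a x4 port moves 2000 MB/s.
print(throughput_mb_s(2, 1))  # 500.0
print(throughput_mb_s(2, 4))  # 2000.0
```

That 500 MB/s per lane is why a lone x1 port gets oversubscribed quickly once several peripherals want to share it.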

Any Port in a Storm?

While our computers and laptops strangle everything in sight with USB cables, inside those same embedded boxes it’s PCIe as the routing mechanism of choice. Just about any standalone peripheral a system designer could want is available with a PCIe interface. Even esoteric peripherals—such as 4K complex FFT, range-finding, or OFDM algorithm IP blocks—usually come with a PCIe 2.0 interface.

Too bad, then, that modern device/host controllers are painfully short on PCIe ports. I did a little Googling and found that if you choose an Intel or AMD CPU, you’re in good shape. A 4th Generation Intel Core i7 with the Intel 8 Series chipset has six PCIe 2.0 ports spread across 12 lanes. Wow. Similarly, an AMD A10 APU has four PCIe ports (configurable as one x4 or four x1). But these are desktop/laptop processors, and they’re not so common in embedded.

AMD’s new G-Series SoC for embedded is an APU with a boatload of peripherals, yet it’s got only one PCIe Gen 2 port (x4). As for Intel’s new Bay Trail-based Atom processors running the latest red-hot laptop/tablet 2-in-1s: I couldn’t find an external PCIe port on the block diagram.

Similarly…Qualcomm Snapdragon 800? Nvidia Tegra 4 or even the new K1? Datasheets on these devices are closely held for customers only, but I found developer references that point to at best one PCIe port. ARM-based Freescale processors such as the i.MX6, popular in set-top boxes from Comcast and others, have one lone PCIe 2.0 port (Figure 1).

What to do if a designer wants to add more PCIe-based stuff?

Figure 1: Freescale i.MX ARM-based CPU is loaded with peripheral I/O, yet has only one PCIe 2.0 port. (Courtesy: Freescale Semiconductor.)


‘Mo Fanout

A PCIe switch solves the one-to-many dilemma. Add in a redriver at the Tx and Rx end, and signal integrity problems over long traces and connectors all but disappear. Switches from companies like Pericom come in many flavors, from simple lane switches that are essentially PCIe muxes, to packet switches with intelligent routing functions.

One simple example of a Pericom PCIe switch is the PI7C9X2G303EL. This PCIe 2.0 three-port/three-lane switch has one x1 upstream port and two x1 downstream ports, and would add two ports to the i.MX6 shown in Figure 1. This particular device, aimed at those low-power consumer doodads I mentioned earlier, boasts some advanced power-saving modes and consumes under 0.7W.

Hook Me Up

Upon researching this for Pericom, I was surprised to learn of all the nuances and variables to consider with PCIe switches. I won’t cover them here, other than mentioning some of the designer’s challenges: PCIe Gen 1 vs Gen 2, data packet routing, latency, CRC verification (for QoS), TLP layer inspection, auto re-send, and so on.

PCIe switches come in all flavors, from the simplest “power strip” to what is essentially an intelligent router-on-a-chip. And for maximum interoperability, all of them need to be compliant with the PCI-SIG specs, as verified at a plugfest.

So if you’re an embedded designer, the solution to your PCIe fanout problem is adding a PCI Express switch. 

Intel’s Atom Secret Decoder Ring

Intel’s code names have gotten even more confusing with the new Atom processors.

It used to be that Intel had one code name for a processor family or process technology variant, and the part number SKUs followed easily from that. Haswell, for instance, is the 4th Generation Core family and the SKUs are 4-digit numbers starting with “4”. Ivy Bridge was 3rd Generation, with 4-digit SKUs starting with “3”. And so on.

The Atom family has changed all of that and I’m confused as hell. Every time I see a new Atom SKU like “C2000” or “E3800” I have to do some research to figure out what the heck it actually is. For some reason, Intel has split the Atom family into mobile, value desktop, microserver, and SoC versions. I’ve yet to find a comprehensive comparison chart that maps the code names (and former code names) to SKUs, markets, or other useful quick-look info. The chart probably exists somewhere in the massive Intel website ecosystem. Or in a PowerPoint presented at an overseas conference. Or maybe not.

Here are a few hints, but I won’t even pretend that this is accurate or comprehensive.

The Artist Formerly Known as Bay Trail

Intel tries to demystify this whole naming bug-a-boo with a sort-of useful table called “Products (Formerly Bay Trail)”.


Intel attempts to de-mystify Atom’s myriad code names and SKUs. I’m not sure it helps much because you have to drill down in each instance and there’s no market segment mapping (you’d need the Press Release to do that).

Bay Trail is the newest 22nm Atom designed for mobile, value desktop and the sorts of applications you’d expect Haswell’s baby brother to target. But there are also “Pentium” (J and N) and “Celeron” (N) versions. Intel is targeting these at desktops, low-end laptops and other “value” platforms that can’t bear the price of Ivy Bridge or Haswell CPUs and chipsets.

Bay Trail Atoms also come in E and Z versions. The just-launched E38xx is called the “SoC” version; it’s based upon Bay Trail’s Silvermont microarchitecture and carries a 10W TDP targeting embedded applications. The Z versions are aimed at tablets: exactly the target you’d expect for Intel’s flagship low-power CPU.

Atoms to Protect and Server

Then there are the C2000 Atom versions. There are two flavors here, broken down by market segment. They’re all 22nm Atoms, but the C23xx, C25xx and C27xx SKUs target servers; more specifically, the microservers where ARM is making headway. Intel’s got a leadership position in servers with Sandy Bridge (Gen 2), Ivy Bridge (Gen 3), and Haswell (Gen 4) CPUs, plus all manner of heavyweight Xeon server CPUs. So it’s essential to offer a competitive product to whatever ARM and their partners might throw at servers (such as the power-efficient, in-order Cortex-A53 or the out-of-order, deeper-pipeline Cortex-A57).

To confuse matters further, there’s the C2000 Atoms targeted at communications platforms. Bizarrely, Intel also calls them–wait for it–C23xx, C25xx, and C27xx. Could they not have changed a few digits around to protect designers’ sanity if only to obviate the need to look them all up?

These Atoms aren’t Bay Trail at all; they’re the former “Avoton”-coded Atoms and they’re definitely not aimed at mobile like Bay Trail. As I dug a bit deeper to try to figure this out, more code names like Rangeley popped up, along with an Avoton block diagram that showed the same Bay Trail Silvermont core surrounded by Avoton I/O resources, all labeled “Edisonville”. Avoton? Rangeley? Edisonville?

(Sigh.) At that point I decided to stick with the Bay Trail embedded versions for now and forget about the networking and communications versions before my head exploded. I’ll dig into this again with a fresh perspective and see if I can find a roadmap slide that makes this all clear.

If you can suggest some links–better yet, Intel charts–that stitch the Atom family into all of its permutations please send me a link. I’ll post your name with fanfare and gratitude.

In the meantime, be sure to always check ark.intel.com as your first SKU reference. It won’t map part numbers to the all-important market segments, but it’s a good start.
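
For quick reference, the hints above boil down to a tiny lookup table. The sketch below is only as accurate (and as incomplete) as this post, and the C2000 ambiguity is faithfully preserved: the same digits cover both the server (Avoton) and communications (Rangeley) parts.

```python
# A tiny decoder ring for the Atom SKUs discussed above.

def decode_atom(sku):
    """Map an Atom SKU prefix to (code name, target market), per the text."""
    sku = sku.upper()
    if sku.startswith("E38"):
        return ("Bay Trail", "embedded SoC, 10W TDP")
    if sku.startswith("Z"):
        return ("Bay Trail", "tablets")
    if sku.startswith(("J", "N")):
        return ("Bay Trail", "value desktop/laptop (Pentium/Celeron brands)")
    if sku.startswith(("C23", "C25", "C27")):
        return ("Avoton or Rangeley", "microservers or communications")
    return ("unknown", "check ark.intel.com")

print(decode_atom("E3815"))  # ('Bay Trail', 'embedded SoC, 10W TDP')
print(decode_atom("C2550"))  # ('Avoton or Rangeley', 'microservers or communications')
```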

The Soft(ware) Core of Qualcomm’s Internet of Everything Vision

Qualcomm supplements silicon with multiple software initiatives.

Qualcomm Snapdragon
Update 1: Added attribution to figures.
The numbers are huge: 50B connected devices; 7B smartphones to be sold by 2017; 1000x growth in data traffic within a few years. Underlying all of these devices in the Internet of Things…wait, the Internet of Everything…is Qualcomm. Shipping 700 million chipsets per year on top of a wildly successful IP creation business in cellular modem algorithms, plus being arguably #1 in 3G/4G/LTE with Snapdragon SoCs in smartphones, the company is now setting its sights on M2M connectivity. Qualcomm has perhaps more initiatives in IoT/IoE than any other vendor. Increasingly, those initiatives rely on the software necessary for the global M2M-driven IoT/IoE trend to take root.

Telit Wireless Devcon
Speaking at the Telit Wireless Devcon in San Jose on 15 October, Qualcomm VP Nakul Duggal of the Mobile Computing Division painted a picture showing the many pieces of the company’s strategy for the IoT/E. Besides the aforementioned arsenal of Snapdragon SoC and Gobi modem components, the company is bringing to bear Wi-Fi, Bluetooth, local radio (like NFC), GPS, communications stacks, and a vision for heterogeneous M2M device communication they call “dynamic proximal networking”. Qualcomm supplies myriad chipsets to Telit Wireless, and Telit rolls them into higher order modules upon which Telit’s customers add end-system value.


Over eight Telit Wireless modules are based upon Qualcomm modems, as presented at the Telit Wireless Devcon 2013.

But it all needs software in order to work. Here are a few of Qualcomm’s software initiatives.

Modem’s ARM and API Open to All
Many M2M nodes (think of a vending machine, or the much-maligned connected coffee maker) don’t need a lot of intelligence to function. They collect data, perform limited functions, and send analytics and diagnostics to their remote M2M masters. Qualcomm’s Duggal says that the ARM processors in Qualcomm modems are powerful enough to perform that computational load. There’s no need for an additional CPU, so the company is making Java (including Java ME), Linux and ThreadX available to run on its 3rd generation of Gobi LTE modems.

Qualcomm is already on its 3rd generation of Gobi LTE modems.


Qualcomm has also opened up the modem APIs and made available their IoT Connection Manager software to make it easier to write closer-to-the-metal code for the modem. Duggal revealed that Qualcomm has partnered with Digi International in this effort as it applies to telematics market segments.

Leverage Smartphone Graphics
And some of those M2M devices on the IoE may have displays: simple UIs at first (like a vending machine), but increasingly complex as the device interacts with the consumer. A restaurant’s digital menu sign, for example, need not run a full-blown PC and Windows Embedded operating system when a version of a Snapdragon SoC will do. After all, the 1080p HDMI graphics needs of an HTC One with S600 far outweigh those of a digital sign. Qualcomm’s graphics accelerators and signal-processing algorithms can easily apply to display-enabled M2M devices. This applies doubly as more intelligence is pushed to the M2M node, alleviating the need to send reams of data up to the cloud for processing.

Digital 6th Sense: Context
Another area Duggal described as the “Digital 6th Sense” might be thought of as contextual computing. Smartphones or wearable fitness devices like Nike’s new FuelBand SE might react differently when they’re outside, at work, or in the home. More than just counting steps and communicating with an app, if the device knows where it is, including precisely where it is inside a building, it can perform different functions. Qualcomm’s portfolio now includes the full Atheros RF spectrum of products: Bluetooth, Bluetooth LE, NFC, Wi-Fi and more. Software stacks for all of these enable connectivity, but code that meshes (no pun) Wi-Fi with GPS data provides both outdoor and indoor position information. Here, Qualcomm’s software melds myriad infrastructure technologies to provide indoor positioning. A partnership with Cisco will bring the technology to consumer locations like shopping malls to coexist with Cisco’s Mobility Services Engine for location-based apps.

Smart Start at Home
Finally, the smart home is another area ripe for innovation. Connected devices in the home range from the existing set-top box for entertainment, to that connected coffee pot, smart meter, Wi-Fi-enabled Nest thermostat and smoke/CO detector, home health and more. These disparate ecosystems, says Duggal, are similar only in their “heterogeneousness” in the home. That is: they were never designed to be interconnected. Qualcomm is taking their relationships with every smart meter manufacturer, their home gateway/backhaul designs, and their smartphone expertise, and rolling it into the new AllJoyn software effort.


The open source AllJoyn initiative, spearheaded by Qualcomm, seeks to connect heterogeneous M2M nodes. Think: STB talks to thermostat, or refrigerator talks to garage door opener. Courtesy: Qualcomm and AllJoyn.org .

AllJoyn is an open source project that seeks to set a “common language for the Internet of Everything”. According to AllJoyn.org, the “dynamic proximal network” is created using a universal software framework that’s extremely lightweight. Qualcomm’s Duggal described the ability for a device to enumerate that it has a sensor, audio, display, or other I/O. Most importantly, AllJoyn is “bearer agnostic”, running across all leading OSes and connectivity mechanisms.


AllJoyn connectivity diagram. Courtesy: www.alljoyn.org .

If Qualcomm is to realize their vision of selling more modems and Snapdragon-like SoCs, making them play well together and exchange information is critical. AllJoyn is pretty new; a new Standard Client (3.4.0) was released on 9 October. It’s unclear to me right now how AllJoyn compares with Wind River’s MQTT-based M2M Intelligent Device Platform or Digi’s iDigi Cloud or Eurotech’s EveryWhere Device Framework.

Qualcomm’s on a Roll
With their leadership in RF modems and smartphone processors, Qualcomm is laser focused on the next big opportunity: the IoT/E. Making all of those M2M nodes actually do something useful will require software throughout the connected network. With so many software initiatives underway, Qualcomm is betting on their next big thing: the Internet of Everything. Software will be the company’s next major “killer app”.

Intel Gets Smart with Smartphones

The 15-year anniversary of Intel’s Developer Forum kicked off with a somewhat predictable keynote by Dadi Perlmutter, EVP/GM Intel Architecture Group (Figure 1). We’re so used to Intel hitting it out of the park that the astounding messages bordered on ho-hum: reminding the audience of the pervasiveness of mobile computing; the morphing of the (not-yet-successful) Ultrabook segment into tablets, slates, and convertible variants; Windows 8 and touch, gesture, and voice computing; next year’s Haswell 22nm microarchitecture; and a brief mention of future Atom variants. What is 100 percent certain is that Intel’s server (Xeon), desktop and laptop (3rd and soon 4th Generation Core) processors will be amazing technology machines that are better than anything available today. And you’ll want one just as soon as they begin shipping in Q1 2013 because they’ll be cool. Literally.

Figure 1: Intel’s Dadi Perlmutter, EVP/GM Intel Architecture Group opens Day 1 of IDF 2012.

But what was most interesting is what Mr. Perlmutter didn’t say that the whole audience wanted to hear: What’s Intel’s roadmap in low-power, portable devices like smartphones and tablets? He offered only that the “First Wave” of Intel Inside smartphones is now available (Figure 2), with more on the way.

Turns out Intel is like an iceberg with only a bit showing above the waterline. The company merged the Core and Atom design teams this year, emphasizing both the need to focus on low-power and SoC solutions, and to solidify the Haswell architecture’s “roadmap-ability” to scale up to server-class performance and down to low-leakage, high-K, power-sipping sleep modes. Five cell phone wins have been announced, all based upon the SoC Atom Z2460 1.6 GHz Medfield platform (Saltwell core): Lenovo, ZTE, Megafon, Lava and Orange. They all run Android 4.0 Ice Cream Sandwich (one revision behind the latest Jelly Bean) except for the Lava, which runs 2.3. According to an Intel spokesperson, all are loosely based upon the company’s Smartphone Reference Design, but the Lava most closely resembles the original Intel specs.

Figure 2: Intel announced five smartphone wins at IDF, all based upon the Medfield Atom SoC and Saltwell core.

The Lava XOLO X900, sold in India, uses the Z2460 with Hyper-Threading, has 16 GB of NV storage and 1 GB of RAM, and drives a 4.03-inch screen at 1024 x 600 with Intel’s 400 MHz Media Graphics Accelerator running OpenGL ES 2.0 with OpenVG 1.1 support. Its 1460 mAh battery is on the small side, similar to the iPhone 4S’s (allegedly 1432 mAh), but “should last 6-8 hours”. The China-destined Lenovo, on the other hand, uses the same Atom SoC and graphics chip, but its 4.5-inch screen displays 720p content. The phone uses a 1900 mAh battery.
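
Those battery figures invite a back-of-envelope check. Only the 1460 mAh capacity and the quoted 6-8 hours come from above; the implied average current draws are derived from those two figures, and real batteries (discharge curves, voltage sag) behave less tidily.

```python
# Back-of-envelope battery life: capacity (mAh) / average draw (mA) = hours.

def battery_hours(capacity_mah, avg_draw_ma):
    """Idealized runtime, ignoring discharge curves and voltage sag."""
    return capacity_mah / avg_draw_ma

# Working backward, 6-8 hours from a 1460 mAh pack implies an average
# draw somewhere between 1460/8 = 182.5 mA and 1460/6 = 243.3 mA.
print(battery_hours(1460, 182.5))  # 8.0 hours
```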

Figure 3: Who knew Intel made modems? The family – available in multiple form factors – originally came from the 2010 Infineon Wireless acquisition.

The other Intel surprise was their wireless modem family (Figure 3), spawned by the 2010 acquisition of Infineon’s wireless group. The company offers modem ICs, dongles, and cores for integration into their own (future) SoCs. The XMM family has a variety of flavors; all five of the smartphones displayed at IDF use Intel’s XMM 6260 HSPA+ modem (21 Mbit/s down / 5.8 Mbit/s up). Designed for 2G/3G networks, multimode “penta-band” support works with multiple worldwide standards: GSM, GPRS, and EDGE (850/900/1800/1900); and HSPA (850/900/1700/1900/2100). These are mixed-signal solutions, combining digital and analog baseband in what Intel calls X-GOLD. No small technical feat.
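
For scale, the quoted 21 Mbit/s peak works out as follows. Real-world HSPA+ throughput sits well below peak; this is just best-case arithmetic:

```python
# Best-case transfer time over the XMM 6260's quoted 21 Mbit/s peak rate.

def transfer_seconds(size_megabytes, rate_mbit_s):
    """Megabytes * 8 bits/byte, divided by the link rate in Mbit/s."""
    return size_megabytes * 8 / rate_mbit_s

# A 100 MB download at the full 21 Mbit/s peak takes about 38 seconds:
print(round(transfer_seconds(100, 21)))  # 38
```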

Intel also has a roadmap strategy for “feature phones” (those candy bar phones popularized by Nokia) for the huge portion of the non-connected world that sees no need for a smartphone. Atom SoCs and modems are available for this slice of the mobile market, too.

So the part of the iceberg now risen into public view (Medfield SoCs and mixed-signal 3G modems) is hugely impressive and clearly shows Intel’s commitment to low-power mobile devices. And these are only the “First Wave”. Clearly Intel knows how to integrate smartphone peripherals, perform baseband signal processing, accelerate and decode/transcode HD graphics, and make a pretty decent low-power smartphone. With Intel writing the Intel Architecture BSP and native code on Android for Google (one of last year’s IDF announcements), the company is well positioned to smartly get into the smartphone game. The Haswell microarchitecture should ratchet down power by 20 times at the system level, said Perlmutter. We’re anxious to see it applied to the Atom roadmap in the Silvermont microarchitecture.

It’s about time.