Quiz question: I’m an embedded system, but I’m not a smartphone. What am I?

In the embedded market, there are smartphones, automotive, consumer…and everything else. I’ve figured out why AMD’s G-Series SoCs fit perfectly into the “everything else.”

Since late 2013 AMD has been talking about their G-Series of Accelerated Processing Unit (APU) x86 devices that mix an Intel-compatible CPU with a discrete-class GPU and a whole pile of peripherals like USB, serial, VGA/DVI/HDMI and even ECC memory. The devices sounded pretty nifty—in either SoC flavor (“Steppe Eagle”) or without the GPU (“Crowned Eagle”). But it was a head-scratcher where they would fit. After all, we’ve been conditioned by the smartphone market to think that any processor “SoC” that didn’t contain an ARM core wasn’t an SoC.

AMD’s Stephen Turnbull, Director of Marketing, Thin Client markets.

Yes, ARM dominates the smartphone market; no surprise there.

But there are plenty of other professional embedded markets that need CPU/GPU/peripherals where the value proposition is “Performance per dollar per Watt,” says AMD’s Stephen Turnbull, Director of Marketing, Thin Clients. In fact, AMD isn’t even targeting the smartphone market, according to General Manager Scott Aylor in his many presentations to analysts and the financial community.

AMD instead targets systems that need “visual compute”: any business-class embedded system that mixes computation with single- or multi-display capability at a “value price.” What this really means is x86-class processing—and all the goodness associated with the Intel ecosystem—plus one or more LCDs. Even better if those LCDs are high-def, need 3D graphics or other fancy rendering, and if there’s industry-standard software in play such as OpenCL, OpenGL, or DirectX. AMD G-Series SoCs run from 6W up to 25W; the low end of this range is considered very power thrifty.

What AMD’s G-Series does best is cram an entire desktop motherboard and its peripheral I/O, plus a graphics card, onto a single 28nm SoC. Who needs this? Digital signs—where up to four LCDs make up the whole image—thin clients, casino gaming, avionics displays, point-of-sale terminals, network-attached storage, security appliances, and oh so much more.

G-Series SoC on the top with peripheral IC for I/O on the bottom.

According to AMD’s Turnbull, the market for thin client computers is growing at 6 to 8 percent CAGR (per IDC), and “AMD commands over 50 percent share of market in thin clients.” Recent design wins with Samsung, HP and Fujitsu validate that using a G-Series SoC in the local box provides more-than-ample horsepower for data movement, encryption/decryption of central server data, and even local on-the-fly video encode/decode for Skype or multimedia streaming.

Typical use cases include government offices where all data is server-based, bank branch offices, and “even classroom learning environments, where learning labs standardize content, monitor students and centralize control of the STEM experience,” says AMD’s Turnbull.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

But what about other x86 processors in these spaces? I’m thinking of various SKUs from Intel, such as their recent Celeron and Pentium offerings (legacy names, but based on modern versions of the Ivy Bridge and Haswell architectures) and various Atom flavors in both dual- and quad-core versions. According to AMD’s published literature, G-Series SoCs outperform dual-core Atoms by 2x (multi-display) or 3x (overall performance) running industry-standard benchmarks for standard and graphics computation.

And then there’s that on-board GPU. If AMD’s Jaguar-based CPU core isn’t enough muscle, the system can load-balance (in performance and power) to move algorithm-heavy loads to the GPU for General Purpose GPU (GPGPU) number crunching. This is the basis for AMD’s efforts to bring the Heterogeneous System Architecture (HSA) spec to the world. Even companies like TI and ARM have jumped onto this one for their own heterogeneous processors.
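To make the load-balancing idea concrete, here is a minimal sketch of the break-even decision a host might make before offloading work to the GPU. This is my own illustration, not AMD’s scheduler; the throughput and transfer-cost numbers are made-up placeholders.

```python
# Hypothetical offload heuristic (illustrative only): dispatching to the
# GPU pays off when the parallel speedup outweighs the fixed cost of
# moving buffers across the CPU/GPU boundary.

def choose_device(n_elements, flops_per_element,
                  cpu_gflops=20.0,      # assumed CPU-core throughput
                  gpu_gflops=80.0,      # assumed integrated-GPU throughput
                  copy_cost_us=200.0):  # assumed per-dispatch transfer overhead
    """Return 'cpu' or 'gpu' for a data-parallel workload."""
    work_gflop = n_elements * flops_per_element / 1e9
    cpu_us = work_gflop / cpu_gflops * 1e6
    gpu_us = work_gflop / gpu_gflops * 1e6 + copy_cost_us
    return "gpu" if gpu_us < cpu_us else "cpu"

# Tiny jobs stay on the CPU; big number crunching moves to the GPU.
print(choose_device(1_000, 10))        # small job -> "cpu"
print(choose_device(10_000_000, 100))  # large job -> "gpu"
```

A shared-memory model such as HSA shrinks that `copy_cost_us` term, which is exactly why it tips more workloads toward the GPU.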

G-Series: more software than hardware.

In a nutshell, after two years of reading about (and writing about) AMD’s G-Series SoCs, I’m beginning to “get religion” that the market isn’t all about smartphone processors. Countless business-class embedded systems need Intel-compatible processing, multiple high-res displays, lots of I/O, myriad industry-standard software specs…and all for a price/Watt that doesn’t break the bank.

So the answer to the question posed in the title above is simply this: I’m a visually-oriented embedded system. And I’m everywhere.

This blog was sponsored by AMD.

 

 

AMD’s “Beefy” APUs Bulk Up Thin Clients for HP, Samsung

There are times when a tablet is too light, and a full desktop too much. The answer? A thin client PC powered by an AMD APU.

Note: this blog is sponsored by AMD.

A desire to remotely access my Mac and Windows machines from somewhere else got me thinking about thin client architectures. A thin “client” machine has sufficient processing for local storage and display—plus keyboard, mouse and other I/O—and is remotely connected to a more beefy “host” elsewhere. The host may be in the cloud or merely somewhere else on a LAN, sometimes intentionally inaccessible for security reasons.

Thin client architectures—or just “thin clients”—find utility in call centers, kiosks, hospitals, “smart” monitors and TVs, military command posts and other multi-user, virtualized installations. At times they’ve been characterized as low performance or limited in functionality, but that’s changing quickly.

They’re getting additional processing and graphics capability thanks to AMD’s G-Series and A-Series Accelerated Processing Units (APUs). By some analysts’ counts, AMD is number one in thin clients, and the company keeps winning designs with its highly integrated x86-plus-Radeon-graphics SoCs: most recently with HP and Samsung.

HP’s t420 and mt245 Thin Clients

HP’s ENERGY STAR-certified t420 is a fanless thin client for call centers, desktop-as-a-service and remote kiosk environments (Figure 1). Intended to mount on the back of a monitor such as the company’s ProDisplays (like you see at the doctor’s office), the unit runs HP’s ThinPro 32 or Smart Zero Core 32 operating system, has either 802.11n Wi-Fi or Gigabit Ethernet, 8 GB of flash and 2 GB of DDR3L SDRAM.

Figure 1: HP’s t420 thin client is meant for call centers and kiosks, mounted to a smart LCD monitor. (Courtesy: HP.)

USB ports for keyboard and mouse supplement the t420’s dual-display capability (DVI-D and VGA)—made possible by AMD’s dual-core GX-209JA running at 1 GHz.

Says AMD’s Scott Aylor, corporate vice president and general manager, AMD Embedded Solutions: “The AMD Embedded G-Series SoC couples high performance compute and graphics capability in a highly integrated low power design. We are excited to see innovative solutions like the HP t420 leverage our unique technologies to serve a broad range of markets which require the security, reliability and low total cost of ownership offered by thin clients.”

The whole HP thin client consumes a mere 45W and, according to StorageReview.com, will retail for $239.

Along the lines of a lightweight mobile experience, HP has also chosen AMD for its mt245 Mobile Thin Client (Figure 2). This thin client “cloud computer” resembles a 14-inch (1366 x 768 resolution) laptop with up to 4 GB of SDRAM and a 16 GB SSD; it runs Windows Embedded Standard 7P 64 on AMD’s quad-core A6-6310 APU with Radeon R4 GPU. There are three USB ports, one VGA and one HDMI connector, plus Ethernet and optional Wi-Fi.

Figure 2: HP’s mt245 is a thin client mobile machine, targeting healthcare, education, and more. (Courtesy: HP.)

Like the t420, the mt245 consumes a mere 45W and is intended for employee mobility while being configured for a thin client environment. AMD’s director of thin client product management, Stephen Turnbull, says the mt245 targets “a whole range of markets, including education and healthcare.”

At the core of this machine—pun intended—is the Radeon GPU, which provides heavy-lifting graphics performance. The mt245 not only takes advantage of virtualized cloud computing but also has the local moxie to run graphics-intensive applications like 3D rendering. Healthcare workers might, for example, examine ultrasound images. Factory technicians could pull up assembly drawings, then rotate them in CAD-like software.

Samsung Cloud Displays

An important part of Samsung’s displays business involves “smart” displays, monitors and televisions. Connected to the cloud or operating autonomously as a panel PC, many Samsung displays need local processing such as that provided by AMD’s APUs.

Samsung’s recently announced (June 17, 2015) 21.5-inch TC222W and 23.6-inch TC242W also use AMD G-Series devices in thin client architectures. The dual-core 2.2 GHz GX222 with Radeon HD 6290 graphics powers both displays at 1920 x 1080, provides six USB ports and Ethernet, and runs Windows Embedded 7 from 4 GB of RAM and a 32 GB SSD.

Figure 3: Samsung’s Cloud Displays also rely on AMD G-Series APUs.

Said Seog-Gi Kim, senior vice president, Visual Display Business, Samsung Electronics, “Samsung’s powerful Windows Thin Client Cloud displays combine professional, ergonomic design with advanced thin-client technology.” The displays rely on the company’s Virtual Desktop Infrastructure (VDI) through a centrally managed data center that increases data security and control (Figure 3). Applications include education, business, healthcare, hospitality or any environment that requires virtualized security with excellent local processing and graphics.

Key to the design wins is the performance density of the G-Series APUs, coupled with legacy x86 software interoperability. The APUs, for both HP and Samsung, add more beef to thin clients.

 

AMD on a Design Win Roll: GE and Samsung, Recent Examples

AMD is announcing several design wins per week as second-gen APUs show promise.

Note: AMD is a sponsor of this blog.

I follow many companies on Twitter, but lately it’s AMD that’s tweeting the loudest with weekly design wins. The company’s APUs—accelerated processing units—seem to be gaining traction in systems where PC functionality with game-like graphics is critical. Core to both of these—pun intended!—is the x86 ISA with its PC compatibility and rich software ecosystem.

Here’s a look at two of AMD’s recent design wins, one for an R-Series and the other for the all-in-one G-Series APU.

Samsung’s “set-back box” adds high-res graphics and PC functions to their digital signage displays. (Courtesy: Samsung.)

Samsung Digital Signs on to AMD

In April Samsung and AMD announced that AMD’s second-gen embedded R-Series APU, previously codenamed “Bald Eagle,” is powering Samsung’s latest set-back box (SBB) digital media players. I had no idea what a set-back box was until I looked it up.

Turns out it’s a slim embedded “pizza box” computer, 310mm x 219mm x 32mm (12.2in x 8.6in x 1.3in), that’s inserted into the back (“set-back”) of a Samsung Large Format Display (LFD). These industrial-grade LFDs range in size from 32in to 82in and are used in digital signage applications.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

What makes them so compelling is also why Samsung chose AMD’s R-Series APU: the SBB is a complete networked PC, alleviating the need for a separate box. The players are remotely controlled by Samsung’s MagicInfo software, which allows up to 192 displays to be linked with same- or stitched-display information.

That is, one can build a video wall where the image is split across the displays—relying on AMD’s EyeFinity graphics feature—or content can be streamed across networked displays depending upon the retailer’s desired effect. Key to Samsung’s selling differentiation is remote management, RS232 control, and network-based self-diagnostics and active alert notification of problems.
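The stitched-display case is easy to picture as arithmetic. Here is a hypothetical sketch (not Samsung’s MagicInfo or AMD’s EyeFinity API) of how one source image gets carved into per-panel crop rectangles for a video wall:

```python
# Illustrative video-wall tiling: compute the source crop rectangle each
# panel should display so a single image is stitched across an
# equal-sized cols x rows grid of panels.

def wall_crops(image_w, image_h, cols, rows):
    """Return {(col, row): (x, y, w, h)} crop rectangles in source pixels."""
    tile_w, tile_h = image_w // cols, image_h // rows
    return {(c, r): (c * tile_w, r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)}

# A 2x2 wall showing one 3840x2160 source image:
crops = wall_crops(3840, 2160, cols=2, rows=2)
print(crops[(0, 0)])  # top-left panel: (0, 0, 1920, 1080)
print(crops[(1, 1)])  # bottom-right panel: (1920, 1080, 1920, 1080)
```

In practice the GPU does this per frame in hardware; the sketch just shows why driving a wall from one box is a geometry problem, not a bandwidth miracle.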

Samsung is using the RX-425BB APU with integrated AMD Radeon R6 GPU. Per the datasheet, this version has a 35W TDP with four x86 cores based on AMD’s latest “Steamroller” 64-bit architecture and six GPU cores at 654 MHz, and it can be paired with AMD’s Embedded Radeon E8860 discrete GPU. Each R-Series APU can drive four 3D, 4K, or HD displays (up to 4096 x 2160 pixels) while running DirectX 11.1, OpenGL 4.2 and AMD’s Mantle gaming SDK.

As neat as all of this is—it’s a super-high-end embedded LAN-party “gaming” PC, after all—it’s the support for the latest HSA Foundation specs that makes the R-Series (and companion G-Series SOC) equally compelling for deeply embedded applications. HSA allows mixed CPU and GPU computation, which is especially useful in industrial control with its combination of general-purpose, machine-control, and display requirements.

GE Chooses AMD SOC for SFF

The second design win for AMD was back in February and it wasn’t broadcast widely: I stumbled across it while working on a sponsored piece for GE Intelligent Platforms (Disclosure: GE-IP is a sponsor of this blog.)

The AMD G-Series is now a monolithic, single-chip SOC that combines x86 CPU and Radeon graphics. (Courtesy: GE; YouTube.)

Used in a rugged, COM Express industrial controller, the AMD G-Series SOC met GE’s needs for low power and all-in-one processing, said Tommy Swigart, Global Product Manager at GE Intelligent Platforms. The “Jaguar” core in the SOC can sip as little as 5W TDP, yet still offers 3x PCIe, 2x GigE, 4x serial, plus HD audio and video, 10 USB (including 2x USB 3.0) and 2 SATA interfaces. What a Swiss Army knife of capability it is.

GE chose AMD’s G-Series APU for a rugged COM Express module for use in GE’s Industrial Internet. (Courtesy: GE Intelligent Platforms, YouTube.)

GE’s going all-in with the GE Industrial Internet, the company’s version of the IoT. Since the company is so diversified, GE can wring cost efficiencies for its customers by predicting aircraft maintenance, reducing energy in office HVAC installations, and interconnecting telemetry from locomotives to reduce track traffic and downtime. AMD’s G-Series APU brings computation, graphics, and bundles of I/O in a single-chip SOC—ideal for use in GE’s rugged SFF.

GE’s Industrial Internet runs on AMD’s G-Series APU. (Courtesy: GE; YouTube.)

 

New HSA Spec Legitimizes AMD’s CPU+GPU Approach

Nearly three years after the formation of the Heterogeneous System Architecture (HSA) Foundation, the consortium has released version 1.0 of its Architecture Spec, Programmer’s Reference Manual, Runtime Specification and a Conformance Plan.

Note: This blog is sponsored by AMD.


 

UPDATE 3/17/15: Added Imagination Technologies as one of the HSA founders. C2

No one doubts the wisdom of AMD’s Accelerated Processing Unit (APU) approach that combines an x86 CPU with a Radeon GPU. After all, one SoC does it all—makes CPU decisions and drives multiple screens, right?

True. Both AMD’s G-Series and the AMD R-Series do all that, and more. But that misses the point.

In laptops, that’s how the APU is used. But in embedded applications—like the IoT of the future, which increasingly relies on high performance embedded computing (HPEC) at the network’s edge—the GPU functions as a coprocessor. CPU + GPGPU (general purpose graphics processing unit) is a powerful combination of decision-making plus parallel/algorithm processing that handles work locally, at the node, reducing the burden on the cloud. This, according to AMD, is how the IoT will reach tens of billions of units so quickly.

Trouble is, HPEC programming is difficult. Coding the GPU requires a “ninja programmer,” quipped AMD’s VP of embedded solutions Scott Aylor during his keynote at this year’s Embedded World Conference in Germany. (Video of the keynote is here.) Worse still, capitalizing on the CPU + GPGPU combination requires passing data between two architectures that don’t share a unified memory architecture. (It’s not that AMD’s APU couldn’t be designed that way; rather, the processors require different memory architectures for maximum performance. In short: they’re different for a reason.)

AMD’s Scott Aylor giving keynote speech at Embedded World, 2015. His message: some IoT nodes demand high-performance heterogeneous computing at the edge.

AMD realized this limitation years ago, and in 2012 it catalyzed the HSA Foundation with several companies including ARM, Texas Instruments, Imagination Technologies, MediaTek, Qualcomm, Samsung and others. The goal was to create a set of specifications that not only define heterogeneous hardware architectures but also establish an HPEC programming paradigm for CPU, GPU, DSP and other compute elements. Collectively, the aim was to make designing, programming, and power-optimizing heterogeneous SoCs easy (Figure).

The HSA Foundation’s goals are realized by making the coder’s job easier using tools—such as an HSA-enabled LLVM open source compiler—that integrate multiple cores’ ISAs. Heterogeneous System Architecture (HSA) specifications version 1.0, HSA Foundation, March 2015. (Courtesy: HSA Foundation; all rights reserved.)

After three years of work, the HSA Foundation just released their specifications at version 1.0:

  • HSA System Architecture Spec: defines hardware and OS requirements, the memory model (important!), the signaling paradigm, and fault handling.
  • Programmer’s Reference Manual: essentially a virtual ISA for parallel computing; defines an output format for HSA language compilers.
  • HSA Runtime Spec: an application library for running HSA applications; defines initialization, user queues, and memory management.

With HSA, the magic really does happen under the hood, where the devil’s in the details. For example, the HSA-enabled LLVM open source compiler creates a vendor-agnostic HSA intermediate language (HSAIL) that’s essentially a low-level virtual ISA. From there, “finalizers” compile HSAIL into vendor-specific ISAs, such as those for AMD APUs or Qualcomm Snapdragon SoCs. It’s at this point that low-level libraries can be added for specific silicon implementations (such as VSIPL for vector math). This programming model uses vendor-specific tools but allows novice programmers to start in C++ and end up with optimized, performance-oriented, power-efficient code for a heterogeneous combination of CPU plus GPU or DSP.
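The flow can be sketched as a toy model. The stage names (HSAIL, finalizer) come from the spec; the functions below are illustrative stand-ins, not the real tools:

```python
# Conceptual model of the HSA toolchain flow:
# portable source -> HSAIL (vendor-agnostic) -> finalizer -> vendor ISA.

def hlc_compile(source: str) -> str:
    """Stand-in for the HSA LLVM compiler: emit vendor-agnostic HSAIL."""
    return f"HSAIL[{source}]"

FINALIZERS = {
    # Stand-ins for vendor finalizers that lower HSAIL to a native ISA.
    "amd_gcn":    lambda hsail: f"GCN({hsail})",
    "snapdragon": lambda hsail: f"Adreno({hsail})",
}

def build(source: str, target: str) -> str:
    hsail = hlc_compile(source)        # written once, vendor-agnostic
    return FINALIZERS[target](hsail)   # lowered per device at deploy time

print(build("saxpy.cpp", "amd_gcn"))
print(build("saxpy.cpp", "snapdragon"))
```

The point of the two-step split is that the expensive, vendor-neutral compilation happens once, while the cheap finalize step happens per target device.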

There are currently 43 companies involved with HSA, 16 universities, and three working groups (and they’re already working on version 1.1). Look at the participants, think of their market positions, and you’ll see they have a vested interest in making this a success.

In AMD’s case, as the only supplier of both x86 and ARM APUs with on-chip GPUs to the embedded market, the company sees even bigger successes as more embedded applications leverage heterogeneous parallel processing.

One example where HSA could be leveraged, said Phil Rogers, President of the HSA Foundation, is for multi-party video chatting. An HSA-compliant heterogeneous architecture would allow the processors to work in a single (virtual) memory pool and avoid the multiple data set copies—and processor churn—prevalent in current programming models.

With key industry players supporting HSA including AMD, ARM, Imagination Technologies, Samsung, Qualcomm, MediaTek and others, a lot of x86, ARM, and MIPS-based SoCs are likely to be compliant with the specification. That should kick off a bunch of interesting software development leading to a new wave of high performance applications.

The Secret World of USB Charging

There’s a whole set of USB charging specs you’ve probably never heard of because big-battery smartphones, tablets and 2:1’s demand shorter charge times.

Editor’s note: this particular blog posting is sponsored by Pericom Semiconductor.  

Now that you can buy $5 USB chargers everywhere (mains- and cigarette-lighter-powered), it’s tempting to think of them like LED flashlights: cheap commodity throw-aways. And you would’ve been right…until now.

My recent purchase of an Asus T100 Transformer Windows 8.1/Intel Atom 2:1 tablet hybrid forced me to dig into USB charging (Figure).

My own Asus T100 Transformer Book has a “unique” USB charging profile.  (Courtesy: Asus.)

This device is fabulous with its convenient micro USB charging port with OTG support. No bulky wall wart to lug around. But it refuses to charge normally from any charger+cable except for the (too short) one that came with it.

My plethora of USB chargers, adapters, powered hubs and more will only trickle charge the T100 and take tens of hours. And it’s not just the device’s 2.0A current requirement, either. There’s something more going on.

Just Say “Charge it!”

The USB Implementers Forum (USB-IF) has a whole power delivery strategy, with goals as shown below. Simply stated, USB is now flexible enough to provide the right amount of power to either end of the USB cable.

The USB Power Delivery goals solidify USB as the charger of choice for digital devices. (Courtesy: www.usb.org )

There’s even a USB Battery Charging (UBC) specification, with a “BC1.2” compliance plan, to make sure devices follow the rules. Some of the new power profiles are shown below:

Table 1: USB Implementers Forum (USB-IF) Battery Charging specifications (from their 1.2 compliance plan document October 2011).

The reason for UBC is that newer devices like Apple’s iPad, Samsung’s Galaxy S5 and Galaxy Tab (and quite possibly my Asus T100 2:1) consume more current and sometimes can even source power to the host device. UBC flexibly delivers the right amount of power and can avoid charger waste.

Communications protocols between the battery’s MCU and the charger’s MCU know how to properly charge a 3000mAh to 10,000mAh battery. Battery chemistry matters, too. As does watching out for heat and thermal runaway; some USB charger ICs take these factors into account.
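The arithmetic behind those charge times is simple. Here is a back-of-the-envelope sketch, assuming an idealized constant-current charge and an assumed ~85 percent efficiency:

```python
# Rough charge-time estimate for the battery sizes mentioned above.
# Real chargers taper the current near full charge, so treat these
# numbers as optimistic lower bounds.

def charge_hours(capacity_mah, charge_ma, efficiency=0.85):
    """Hours to charge a battery at a given current (idealized)."""
    return capacity_mah / (charge_ma * efficiency)

# A 10,000 mAh tablet battery: 500 mA trickle vs. a 2.4 A fast charger.
print(round(charge_hours(10_000, 500), 1))   # ~23.5 h -- the overnight wait
print(round(charge_hours(10_000, 2_400), 1)) # ~4.9 h
```

That 5x gap between a default 500 mA port and a 2.4 A profile is the whole motivation for the fast-charging protocols above.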

Apple, ever the trend-setter (and master of bespoke specifications), created its own proprietary fast-charging profiles called Apple 1A, 2A and now 2.4A. The Chinese telecom industry has created its own, YD/T 1591-2009. Other suppliers of high-volume devices have created, or are working on, bespoke charging profiles.

Fast, proper rate charging from Apple, Samsung and others is essential as harried consumers increasingly rely on mobile devices more than laptops. Refer to my complaint above RE: my Asus T100.

Who has time to wait overnight?!

USB Devices Available

Pericom Semiconductor, the sponsor of this particular blog post, has been an innovator in USB charging devices since 2007. With a growing list of charge-compatible consumer products, the company has a broad portfolio of USB ICs.

Take the automotive-grade PI5USB8000Q, for instance. Designed for the digital car, this fast charger supports all of the USB-IF BC modes per BC1.2, Apple 1A and 2A, and the Chinese telecom standard. The IC powers down when there’s no load to save the car’s own battery, and can automatically detect the communication language to enable the proper charging profile (Figure). Pretty cool, eh?

The USB-IF’s CDP and SDP charging profiles require communication between the USB charger and the portable device (PD) being charged. Refer to Table 1 for details. (Courtesy: Pericom Semiconductor.)

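For the curious, BC1.2’s detection handshake boils down to two probe-and-sense steps on the data lines. Here is a simplified sketch of the decision logic; the comparator results are mocked booleans, since real charger ICs like Pericom’s implement this in hardware:

```python
# Simplified BC1.2 port detection as seen from the portable device:
#   primary detection:   drive ~0.6 V on D+, sense whether D- follows;
#   secondary detection: drive ~0.6 V on D-, sense whether D+ follows.
# A plain data port (SDP) leaves the other line low; charging ports
# (CDP/DCP) answer back, and DCP answers in both directions.

def classify_port(dm_follows_dp, dp_follows_dm):
    """
    dm_follows_dp: primary-detection comparator result (D- pulled up?)
    dp_follows_dm: secondary-detection comparator result (D+ pulled up?)
    """
    if not dm_follows_dp:
        return "SDP"   # standard downstream port: 0.5 A max once enumerated
    return "DCP" if dp_follows_dm else "CDP"  # both allow up to ~1.5 A

print(classify_port(False, False))  # SDP
print(classify_port(True,  False))  # CDP
print(classify_port(True,  True))   # DCP
```

Proprietary profiles (Apple’s, for instance) signal differently on D+/D−, which is why multi-protocol charger ICs must auto-detect before picking a profile.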
As For My Asus 2:1?

Sadly, I can’t figure out how the T100 “talks” with its charger, or if there’s something special about its micro USB cable. So I’m stuck.

But if you’re designing a USB charger, a USB device, or just powering one, Pericom’s got you covered. That’s a secret to get all charged up about.

USB Charging Becoming More Common than a 110V Socket?

USB is everywhere, and increasingly, it is the “socket” through which we charge all of our stuff. Microchip wants to be inside all those “sockets”.

[UPDATE: 8/13/13 11:45pm PDT. Changed from 12V to 12W; corrected sentence accordingly. C2]

The common 110VAC wall socket. (Courtesy: commons.wikimedia.org .)

I noticed that Home Depot now carries replacement 110VAC sockets with USB ports built in. What a handy way to avoid fumbling around for the elusive USB charger “wall wart”, especially since most non-Apple portable devices now sport micro USB charging ports and accept cheap $1 USB cables.

As USB proliferates to nearly every battery-operated consumer device, I’m wondering when the number of USB ports (2.0 and 3.0 SuperSpeed) will exceed the number of 110VAC wall sockets. The number of 110V sockets in the typical home is static (remodeling and extension cords notwithstanding), but the number of USB-charged digital cameras, Kindles, Samsung Galaxy S4 smartphones, tablets, hubs and so on keeps increasing.

The common USB 2.0 connector. (Courtesy: commons.wikimedia.org .)

With this USB onslaught around us, it’s no wonder that more IC suppliers are focusing on USB silicon. And not just for USB 2.0 480 Mbits/s connectivity.

Case in point: Microchip’s latest USB port power controllers, the UCS1001-3 and -4 and the UCS1002-2. The new family not only supports active cables such as Apple’s irritatingly proprietary Lightning connector, but also 12W charging (current up to 2.5A), plus priority charging capabilities.

 

Microchip aims to charge any USB device up to 12W. (Courtesy: Microchip.)

A built-in sensor can report the amount of current being sourced, so the port’s host system will know the difference between, say, a Nexus 7 or an iPad (which require over 2A charging) and a digital camera battery or child’s toy. The family can also support future USB charging profiles, since USB is essentially becoming not only a 3.0 SuperSpeed connection channel (currently 5 Gbits/s) but an intelligent battery charger. Microchip is so bullish on USB that it has even created two eval boards, one of which emphasizes USB’s programmable charging capabilities.

One of two eval boards, this one allows designers to explore programmable charging profiles. (Courtesy: Microchip.)

If USB does become more ubiquitous than the common 110VAC wall socket, Microchip hopes they’ll be the equivalent of “Intel Inside” every USB host device. Yeah, I think it could happen.

Intel’s Atom Roadmap Makes Smartphone Headway

After being blasted by users and pundits over the lack of “low power” in the Atom product line, new architecture and design wins show Intel’s making progress.

Intel EVP Dadi Perlmutter revealing an early convertible tablet computer at IDF 2012.

A 10-second Google search on “Intel AND smartphone” reveals endless pundit comments on how Intel hasn’t been winning enough in the low power, smartphone and tablet markets.  Business publications wax endlessly on the need for Intel’s new CEO Brian Krzanich to make major changes in company strategy, direction, and executive management in order to decisively win in the portable market. Indications are that Krzanich is shaking things up, and pronto.

Forecasts by IDC (June 2013) and reported by CNET.com (http://news.cnet.com/8301-1035_3-57588471-94/shipments-of-smartphones-tablets-and-oh-yes-pcs-to-top-1.7b/) peg the PC+smartphone+tablet TAM at 1.7B units by 2014, of which 82 percent (1.4B units, $500M USD) are low power tablets and smart phones. And until recently, I’ve counted only six or so public wins for Intel devices in this market (all based upon the Atom Medfield SoC with Saltwell ISA I wrote about at IDF 2012). Not nearly enough for the company to remain the market leader while capitalizing on its world-leading tri-gate 3D fab technology.

Behold the Atom, Again

Fortunately, things are starting to change quickly. In June, Samsung announced that the Galaxy Tab 3 10.1-inch SKU would be powered by Intel’s Z2560 “Clover Trail+” Atom SoC running at 1.2GHz. According to PC Magazine, “it’ll be the first Intel Android device released in the U.S.” (http://www.pcmag.com/article2/0,2817,2420726,00.asp) and it complements other Galaxy Tab 3 offerings with competing processors. The 7-inch SKU uses a dual-core Marvell chip running Android 4.1, while the 8-inch SKU uses Samsung’s own Exynos dual-core Cortex-A9 ARM chip running Android 4.2. The Atom Z2560 also runs Android 4.2 on the 10.1-incher. Too bad Intel couldn’t have won all three sockets, especially since Intel’s previous lack of LTE cellular support has been solved by the company’s new XMM 7160 4G LTE chip, and supplemented by new GPS/GNSS silicon and IP from Intel’s ST-Ericsson navigation chip acquisition.

The Z2560 Samsung chose is one of three “Clover Trail+” platform SKUs (Z2760, Z2580, Z2560) formerly known merely as “Cloverview” when the dual-core, Saltwell-based, 32-nm Atom SoCs were leaked in Fall 2012. The Intel alphabet soup starts getting confusing because the Atom roadmap looks like rush hour traffic feeding out of Boston’s Sumner tunnel. It’s being pushed into netbooks (for maybe another quarter or two); value laptops and convertible tablets as standalone CPUs; smartphones and tablets as SoCs; and soon into the data center to compete against ARM’s onslaught there, too.

Clover Trail+ replaces Intel’s Medfield smartphone offering and was announced at February’s MWC 2013. According to Anandtech.com (thank you, guys!), Intel’s aforementioned design wins with Atom used the 32nm Medfield SoC for smartphones. Clover Trail, still at 32nm with the Saltwell microarchitecture, targeted Windows 8 tablets, while Clover Trail+ targets only smartphones and non-Windows tablets. That explains the Samsung Galaxy Tab 3 10.1-inch design win. The datasheet for Clover Trail+ is here, and shows a dual-core SoC with multiple video CODECs, integrated 2D/3D graphics, on-board crypto, multiple multimedia engines such as Intel Smart Sound, and optimization for Android and, presumably, Intel/Samsung’s very own HTML5-based Tizen OS (Figure 1).

Figure 1: Intel Clover Trail+ block diagram used in the Atom Z2580, Z2560, and Z2520 smartphone SoCs. This is 32nm geometry based upon the Saltwell microarchitecture and replaces the previous Medfield single core SoC. (Courtesy: Intel.)

I was unable to find meaningful power consumption numbers for Clover Trail+, but its 32nm geometry compares favorably to the 28nm geometry of ARM’s Cortex-A15, so Intel should be in the ballpark (vs. Medfield’s 45nm). Still, the market wonders if Intel finally has the chops to compete. At least it’s getting much, much closer–especially once the on-board graphics performance gets factored into the picture compared to ARM’s lack thereof (for now).

Silvermont and Bay Trail and…Many More Too Hard to Remember

But Intel knows it’s got more work to do to compete against Qualcomm’s home-grown Krait ARM-compatible cores, some nVidia offerings, and Samsung’s own in-house designs. Atom will soon be moving to 22nm, and the next microarchitecture is called Silvermont. Intel is finally putting power curves up on the screen, and at product launch I’m hopeful there will be actual Watt numbers shown, too.

For example, Intel is showing off Silvermont’s “industry-leading performance-per-Watt efficiency” (Figure 2). Press data from Intel says the architecture will offer 3x peak performance, or 5x lower power compared to the Clover Trail+ Saltwell microarchitecture. More code names to track: the quad-core Bay Trail SoC for 2013 holiday tablets; Merrifield with increased performance and battery life; and finally Avoton that provides 64-bit energy efficiency for micro servers and boasts ECC, Intel VT and possibly vPro and other security features. Avoton will go head-to-head with ARM in the data center where Intel can’t afford to lose any ground.

Figure 2: The 22nm Atom microarchitecture called Silvermont will appear in Bay Trail, Avoton and other future Atom SoCs from "Device to Data Center", says Intel. (Courtesy: Intel.)

Oh Yeah? Who’s Faster Now?

As Intel steps up its game because it has to win or else, the competition is not sitting still. ARM licensees have begun shipping big.LITTLE SoCs, and the company has announced new graphics, DSP, and mid-range cores. (Read Jeff Bier and BDTI’s excellent recent ARM roadmap overview here.)

A recent report by ABI Research (June 2013) tantalized (or more appropriately galvanized) the embedded and smartphone markets with the headline “Intel Apps Processor Outperforms NVIDIA, Qualcomm, Samsung”. In comparison tests, ABI Research VP of engineering Jim Mielke noted that the Intel Atom Z2580 “not only outperformed the competition in performance but it did so with up to half the current drain.”

The embedded market didn’t necessarily agree with the results, and UBM Tech/EETimes published extensive readers’ comments with colorful opinions. On a more objective note, Qualcomm launched its own salvo as we went to press: “You’ll see a whole bunch of tablets based upon the Snapdragon 800 in the market this year,” said Raj Talluri, SVP at Qualcomm, as reported by Bloomberg Businessweek.

Qualcomm has made its Snapdragon product line more user-friendly and appears to be readying the line for general embedded market sales in Snapdragon 200, 400, 600, and “premium” 800 SKU versions. The company has made development tools available (mydragonboard.org/dev-tools) and is selling COM-like Dragonboard modules through partners such as Intrinsyc.

Intel Still Inside

It’s looking like a sure thing that Intel will finally have competitive silicon to challenge ARM-based SoCs in the market that really matters: mobile, portable, and handheld. 22nm Atom offerings are getting power-competitive, and the game will change to an overall system integration and software efficiency exercise.

Intel has for the past five years been emphasizing a holistic all-system view of power and performance. Their work with Microsoft has wrung out inefficiencies in Windows and capitalizes on microarchitecture advantages in desktop Ivy Bridge and Haswell CPUs. Security is becoming important in all markets, and Intel is already there with built-in hardware, firmware, and software (through McAfee and Wind River) advantages. So too has the company radically improved graphics performance in Haswell and Clover Trail+ Atom SoCs…maybe not to the level of AMD’s APUs, but absolutely competitive with most ARM-based competitors.

And finally, Intel has hedged its bets in Android and HTML5. They are on record as writing more Android code (for and with Google) than any other company, and they’ve migrated past MeeGo failures to the might-be-successful HTML5-based Tizen OS which Samsung is using in select handsets.

As I’ve said many times, Intel may be slow to get it…but it’s never good to bet against them in the long run. We’ll have to see how this plays out.

HTML5 Is What’s Needed To Rapidly Develop IVI Automotive Apps

HTML5 logo

Car manufacturers know that in-car technology like navigation systems sells cars. The pace of the smartphone movement is impacting the painfully slow speed with which automotive manufacturers develop new cars and tech features. Consumers trade out their phones every two years, but a two-year-old car is still considered nearly “new” by Kelley Blue Book. So how can the auto OEMs satisfy consumers’ tastes for updated, red-hot in-vehicle infotainment (IVI) systems and add-on Apps?

Elektrobit speaks about HTML5, IVI, and HMI for automotive markets

Automotive software supplier Elektrobit thinks HTML5 is the answer. Coincidentally, so does RIM’s QNX division, along with Intel. QNX supplies “CAR 2” software to several auto OEMs, and Intel is behind Tizen, an HTML5-based competitor to Android. While Samsung has endorsed Tizen for a handful of smartphones, Intel has publicly stated that Tizen is also targeting automotive IVI systems, as I wrote about here.

At a webinar today (5 March 2013) hosted by Automotive World magazine, Elektrobit’s VP of Automotive, Rainer Holve, argued that HTML5 is the perfect language in which to develop and deploy fast-changing IVI HMI software. Most importantly, the car’s core “native” IVI functions should stay separate and subject to safety-critical coding practices.

By partitioning the IVI software in this manner, the two ecosystems are decoupled and can run on their own market- and OEM-driven schedules.  This means that native IVI–like GPS navigation, audio, HVAC, or OBDII diagnostic information like fuel consumption–can be developed slowly and methodically on the typical 2-5+ year automobile OEM cycle.

But the faster-moving, consumer-smartphone-inspired IVI portion, and its fast-moving add-on Apps ecosystem, can move very, very quickly. This allows consumers to refresh not only the Apps, but also allows the OEMs to upgrade the entire HMI experience every few years without having to replace the whole car.

HTML5 decouples the slow automotive dev cycle from the super-fast IVI App cycle.

While the OEMs would love for an HMI refresh to force the consumer to replace the car every two years, it’s not going to happen. HTML5 is a reasonable alternative and they know it. According to Elektrobit, Chrysler, GM, and Jaguar/Land Rover (JLR) have already started projects with HTML5.

HTML5 is an “evolution and cleanup of previous HTML standards,” said Elektrobit’s Holve, and is composed of HTML+CSS+JavaScript, along with new features for A/V, a 2D graphics canvas, a 3D API, support for hardware acceleration, and much more. HTML5 is based upon open standards and is supported by the Web Hypertext Application Technology Working Group (WHATWG) and the World Wide Web Consortium (W3C). Independently, W3C is working on standardized JavaScript APIs, which makes the HTML5 value proposition even sweeter.
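
In practice, an HTML5 HMI would probe at startup for the features Holve lists before enabling the fancier rendering paths. Here's a minimal, hypothetical sketch of that kind of feature detection; `env` stands in for the browser's global objects, and the property names checked are standard HTML5 interfaces:

```javascript
// Minimal sketch of HTML5 feature detection, as an IVI HMI might do at boot.
// `env` stands in for the browser's window object (passed in to keep this testable).
function detectFeatures(env) {
  return {
    canvas2d: typeof env.CanvasRenderingContext2D !== "undefined", // 2D graphics canvas
    webgl:    typeof env.WebGLRenderingContext !== "undefined",    // 3D API
    audio:    typeof env.Audio !== "undefined",                    // A/V support
  };
}

// A head unit whose browser has canvas and audio, but no WebGL:
detectFeatures({ CanvasRenderingContext2D: function () {}, Audio: function () {} });
// → { canvas2d: true, webgl: false, audio: true }
```

The HMI can then fall back gracefully (2D menus instead of 3D transitions) on lower-end head units, which is part of the "one codebase, many vehicles" appeal.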

Besides decoupling the HMI software from the “core” HMI functions, HTML5 would allow third-party Apps developers to swiftly write and deploy applications for IVI systems. Besides Internet connectivity itself, this is the one IVI feature that consumers demand: a choice of what Apps to add whenever they so choose. And since every automobile OEM will have to certify an App for safe in-vehicle use with their particular system, HTML5 allows App developers to create one core App that can be easily modified for multiple manufacturers and their myriad (and differentiated) vehicle models.  In short: HTML5 makes things easier for everyone, yet still allows a robust third-party market to flourish.
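
The "one core App, modified per manufacturer" idea can be sketched as a single codebase plus a per-OEM certification profile. Everything below is illustrative, not a real OEM API: the profile names and fields are invented to show the pattern, not taken from any certification program:

```javascript
// Hypothetical per-OEM certification profiles; field names are invented.
const oemProfiles = {
  chrysler: { maxListItems: 6, fontScale: 1.2 },
  jlr:      { maxListItems: 8, fontScale: 1.0 },
};

// The core app logic stays identical across OEMs; only the presentation
// limits imposed by each manufacturer's certification differ.
function buildAppConfig(baseConfig, oemName) {
  const profile = oemProfiles[oemName];
  if (!profile) {
    throw new Error("No certification profile for OEM: " + oemName);
  }
  return { ...baseConfig, ...profile, oem: oemName };
}

const base = { appName: "WebRadio", maxListItems: 10, fontScale: 1.0 };
const chryslerBuild = buildAppConfig(base, "chrysler");
// chryslerBuild.maxListItems === 6, while chryslerBuild.appName stays "WebRadio"
```

The developer ships one HTML5 app and a small profile per manufacturer, rather than a separate build per vehicle model.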

It’s important to note how this is both similar to, and differs from, the current IVI strategy of many OEMs that rely solely on the smartphone for Apps. Chevrolet, Peugeot, Renault, Toyota and others tether the smartphone to the IVI system and “mirror” the phone’s Apps on the screen (see my blog on Mirroring). This allows the wildly robust iOS and Android App ecosystems into the car (and soon RIM/Blackberry and Windows 8 Phone), but it comes at a price.

2013 Chevrolet MyLink IVI uses MirrorLink with smartphone apps

In this scenario, the auto OEM must certify every App individually for use in their vehicle to ensure safety and that critical car systems can’t be hacked or compromised. Or, the OEM can allow all Apps to run and hope for the best. One hopes a rogue App doesn’t access the CAN bus and apply the ABS or electric steering.

HTML5, on the other hand, gently forces developers to create Apps destined for IVI systems, but adds only a slight burden on them to make minor changes for each manufacturer’s certification. In this way they’re not barred from the car indiscriminately, but can develop a business of IVI apps separate from their smartphone iOS, Android and other Apps.

Intel's Renee James is betting on HTML5 in Tizen to kickstart transparent computing. (Image taken by author at IDF 2012.)

Will HTML5 be successful? Is it the right answer for the rabid consumer’s taste for car tech, while still giving the auto manufacturer the safety and security they’re required to offer by law? I was skeptical about Tizen until Samsung’s announcements at Mobile World Congress 2013 last month. With Tizen pushing HTML5 for “openness”, it may just gain traction in automotive, too.

Watch this space. We’ll keep you updated.

“Mirror, Mira” on the Car’s IVI Screen: Two Different Standards?

You might be hearing about a new technology called MirrorLink that mimics your smartphone’s screen on the larger nav screen in your “connected car”. Or, you might be following the news on Miracast, a more open standard now baked into Android that offers Apple AirPlay-like features to stream smartphone content to devices like connected TVs.

You’d be forgiven if you think the two similarly-named standards are trying to accomplish the same thing. I didn’t understand it either, so I did some digging. Here’s what I found out.

The Smart, Connected Car
When I attended the Paris Auto Show last Fall specifically to investigate in-vehicle infotainment (IVI) trends for the Barr Group under contract to Intel, I got spun up “right quick” on all manner of IVI. From BMW’s iDrive to Chevrolet’s MyLink, the connected car is here. In fact, it’s one of the biggest trends spotted at last week’s 2013 CES in Las Vegas. MirrorLink is being designed into lots of new cars.

BMW’s iDrive IVI uses a native system and doesn’t rely on smartphone mirroring. (Courtesy of BMW.)

The biggest question faced by every auto manufacturer is this: in-car native system, or rely on the apps in one’s smartphone? Ford’s industry breakthrough MyFord Touch with SYNC by Microsoft is native and based upon Microsoft Auto Platform (now called Windows Embedded Automotive 7). Elsewhere, premium brands like BMW, Lexus and Cadillac have designed self-contained systems from the ground up. Some, like BMW, include in-car cellular modems. Others rely on the smartphone only for music and Internet access, but that’s it.

2013 Chevrolet MyLink IVI uses MirrorLink with smartphone apps. (Courtesy of Chevrolet.)

Still others, like Toyota and Chevrolet, use a technology called MirrorLink to “mirror” the smartphone’s screen onto the car’s larger IVI display. For any app that makes sense to view on the IVI, the system will display it — usually identically to what the user sees on the smartphone (subject to safety and distraction caveats).

MirrorLink is now a trademarked standard owned by the Car Connectivity Consortium that’s designed specifically for cars and smartphones. That means the standard worries about driver distractions, apps that make sense for drivers (such as Google Maps) and those that don’t (such as a panoramic camera stitching application). Apps have to be qualified for use with MirrorLink.

As well, MirrorLink replaces the phone’s touch I/O with in-car I/O such as steering wheel controls, console joysticks, or the IVI head unit’s touchscreen or bezel buttons. Equally important, audio input from microphones is routed from the car to the phone, while output uses the car’s speakers. The car’s antennae for radio and GPS are given preference over the phone’s, improving signal reception. The protocols between smartphone and car also take input from the vehicle’s CAN bus, including speed. This means that you can check your email when parked, but not while driving. A great resource for how it works and what the future holds is here.
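
That "email when parked, not while driving" behavior boils down to gating apps on CAN-bus vehicle speed. Here's a hedged sketch of the idea; the `driveSafe` flag and the logic are my own illustration of driver-distraction gating, not code from the MirrorLink specification:

```javascript
// Illustrative sketch of MirrorLink-style driver-distraction gating.
// The app categories and threshold are assumptions, not from the spec.
const PARKED_SPEED_KPH = 0;

function isAppAllowed(app, vehicleSpeedKph) {
  if (app.driveSafe) return true;             // e.g. navigation, hands-free calls
  return vehicleSpeedKph <= PARKED_SPEED_KPH; // email, video, etc. only when parked
}

isAppAllowed({ name: "Google Maps", driveSafe: true }, 80); // true: allowed while driving
isAppAllowed({ name: "Email", driveSafe: false }, 80);      // false: locked out at speed
isAppAllowed({ name: "Email", driveSafe: false }, 0);       // true: allowed when parked
```

The real system makes this decision in the head unit using the qualified-app metadata described above, but the shape of the check is the same.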

MirrorLink started as a Nokia idea intended for smartphone-to-car connectivity. Now at version 1.1, it’s a client-server architecture where the IVI head unit is the USB host. It uses industry-standard protocols such as Internet Protocol (IP), USB, Wi-Fi, Bluetooth (BT HFP for telephony, BT A2DP for media), RTP, and UPnP. Recent additions use Trusted Computing Group device-attestation concepts with SKSD/PKSD keys for authentication. The actual screen sharing uses the VNC protocol.

MirrorLink and Trusted Computing Group authentication process for trusted content. (Courtesy of Car Connectivity Consortium.)

What MirrorLink doesn’t yet support is video streaming, since drivers watching video is a no-no in cars (tell that to the Japanese drivers I’ve seen with TVs mounted in their cars!).

Android and Miracast
Miracast, on the other hand, is all about streaming. It’s a Wi-Fi Alliance spec recently demoed at CES 2013 that’s designed to stream video and photos from smartphones, tablets, and future embedded devices. Like Apple’s AirPlay, it moves stuff from a small screen onto a big TV screen. It’s based upon Wi-Fi’s not-new-but-rarely-used Wi-Fi Direct standard, which avoids routers to establish peer-to-peer connectivity (Intel’s WiDi 3.5 is also Miracast-compatible).

The Wi-Fi Alliance Miracast standard streams video from small to large screens, as shown in this excerpt from a YouTube video. (Courtesy of YouTube and Wi-Fi Alliance.)

Miracast supports 1080p HD video and 5.1 surround sound, and chip vendors including nVidia, TI, Qualcomm, and Marvell have announced plans to support it. Built into the spec is the ability to stream DRM- and HDCP-protected content using already-established HDMI and DisplayPort-style copy protection schemes. I guess they figure if you’ve got the rights to play it on your phone, you might as well play it on your TV too.

Last Fall, Google updated Android Jelly Bean to 4.2 and included Miracast as part of the update, and I’m thrilled that my Nexus 7 tablet can now, in theory, stream content to my Samsung Smart TV. As Android proliferates throughout the embedded market, I can envision commercial applications where a user might do more than stream a video to another embedded device. Sharing the entire smartphone’s screen can be useful for PowerPoint presentations or demoing just about any Android app in existence. If it’s on the phone’s screen, it can get mirrored via Wi-Fi to another screen.

Will MirrorLink and Miracast Converge?
I doubt the two standards will merge. MirrorLink is exclusively aimed at IVI systems in cars, and the closely curated standard is intended to vet applications to assure safe operation in a vehicle. Miracast is similar in that it mirrors a smartphone’s screen, but it places no limitations on moving between screens, so Miracast is clearly the superset standard aimed at a broader market.

Ironically, as the Car Connectivity Consortium looks to release MirrorLink Version 2.0, they’re examining Miracast as a way to provide an “alternative video link” for streaming H.264 1080p@30 FPS into the car cabin.

Why? For passenger entertainment. Think about minivans (shudder) and Suburbans loaded with kids.

Tizen OS for Smartphones – Intel’s Biggest Bet Yet

Tizen HTML5 from Intel and Linux Foundation to be used by Samsung handsets in 2013 mobile.

Figure 1: Intel and the Linux Foundation collaborated on Tizen, an open source HTML5-based platform for smartphones, IVI, and other embedded devices.

[Update on 27 February 2013: At the recent 2013 Mobile World Congress in Barcelona, Samsung demoed a development handset running Tizen. CNET editor Luke Westaway posted a video review of the device which showed snappy performance, Android-like features, but felt that the early version was "a bit rough around the edges". Still, to see Tizen running on actual consumer hardware gives it cred.  A larger review by CNET's Roger Cheng can be found here: http://cnet.co/15R8xs3 ]

[8 Jan 2013 Update: Added "Disclosure" below and fixed some typos.]

Disclosure: As of 8 Jan 2013, I became a paid blogger for Intel’s ‘Roving Reporter’ embedded Intelligent Systems Alliance (edc.intel.com). But my opinion here is my own, and I call it like I see it.

Samsung hedges Apple, Google bets with Intel’s HTML5-based Tizen

Just when you thought the smartphone OS market was down to a choice between iOS and Android, Intel-backed Tizen jumps into the fray (Figure 1). Tizen is Intel’s next kick at the can for mobile, and it joins several OS wannabes: Microsoft Windows Phone 8, RIM Blackberry’s whatever-they’re-going-to-announce on 31 January 2013, and eventually the Ubuntu phone platform.

Figure 2: On 3 January 2013 Ubuntu announced a plan to offer a smartphone OS. Key feature: use the phone as a computing platform and even drive a desktop monitor.

Samsung Prepares to “Date” Other Partners

Samsung Electronics announced on 3 January that it will start selling smartphones sometime this year using Tizen as the OS platform. Samsung’s spokesperson didn’t elaborate on timing or models, but said in an emailed statement, “We plan to release new, competitive Tizen devices…and keep expanding the lineup.”

Tizen is the third incarnation of Intel’s attempts at building an embedded ecosystem, following Moblin and MeeGo. Announced in mid-2011 in collaboration with The Linux Foundation, Tizen has been quietly gestating in the background and is now at Release 2.0. One of the largest supporters of Tizen is Samsung, so the recent announcement is no surprise.

Samsung no doubt seeks a back-up plan as Google’s Android OS has flown past Apple’s iOS as the predominant operating system for mobile devices and tablets (75%; Figure 3).

Figure 3: Android is now the predominant smartphone OS in 2012, according to IDC. (Source: IDC; http://www.idc.com/getdoc.jsp?containerId=prUS23818212 ).

As Samsung is now the world’s largest smartphone supplier (Figure 4), the company might be following a play from Apple in seeking to control more of its own destiny through Tizen.

Figure 4: IC Insights – and most other analyst firms – rank Samsung as the world’s largest smartphone supplier. This data is from 28 November 2012. (Source: IC Insights; http://www.icinsights.com/news/bulletins/Samsung-And-Apple-Set-To-Dominate-2012-Smartphone-Market/)

And with Samsung and Apple’s patent dispute nastiness, along with rumblings over whether Samsung may or may not continue to supply processors for iPhones, Tizen represents one more way for Samsung to control their own destiny separate from Google and Apple.

Intel’s Mobile Imperative Needs HTML5

Intel, on the other hand, desperately needs more wins in the mobile space. Last year I blogged about how the company gained some traction by announcing several Atom (Medfield) SoC-based handset wins, but the company has gone on record stating its real goal is to be inside mobile devices from Apple, Samsung, or both. In fact, it’s a bet-the-farm play for Intel, and it most likely pushed Intel CEO Paul Otellini into his future retirement plans.

The general embedded market is closely following what happens in mobile: adopting low-power ARM SoCs and Atom CPUs, using wireless Wi-Fi and NFC radios for M2M nodes, and deploying Android for both headed and headless systems such as POS and digital signage. If Tizen moves the needle in smartphones for Samsung, chances are it’ll be used by other players. With HTML5, it will be straightforward to port applications and data across hardware platforms – a goal that Intel’s EVP Renee James touted at 2012’s Intel Developers Forum (Figure 5).

Figure 5: Intel’s Renee James is betting on HTML5 in Tizen to kickstart transparent computing. (Image taken by author at IDF 2012.)

Tizen is based upon HTML5, with plans to achieve the old Java “write once, run anywhere” promise. For Intel, the Tizen SDK and API mean that applications written for the most popular mobile processors – such as Qualcomm’s Snapdragon or nVidia’s Tegra 3 – could easily run on Intel processors. In fact, at IDF Intel presented a demo of a user’s application running first on a home PC, then a smartphone, then a connected in-vehicle infotainment (IVI) system, and finally on an office platform. Intel’s Renee James explained that it matters not what underlying hardware runs the application – HTML5 allows seamless migration across any and all devices.
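
What makes that PC-to-phone-to-IVI demo plausible is that an HTML5 app's state is just data, which any device's runtime can serialize and resume. Here's a hedged sketch of the idea; the state fields (`route`, `scroll`, `media`) are invented for illustration and don't come from Tizen's actual APIs:

```javascript
// Illustrative sketch of HTML5 cross-device state migration: because the
// app's state is plain data, any HTML5 runtime (PC, phone, IVI head unit)
// can snapshot it on one device and resume it on another. Field names are invented.
function snapshotState(app) {
  return JSON.stringify({ route: app.route, scroll: app.scroll, media: app.media });
}

function resumeState(json) {
  return JSON.parse(json); // the receiving device rebuilds its UI from this state
}

// Hand off a music app mid-session from phone to IVI:
const handoff = snapshotState({ route: "/radio", scroll: 42, media: { track: 3 } });
const resumed = resumeState(handoff); // resumed.route === "/radio"
```

The underlying hardware never enters into it, which is exactly the point James was making.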

Tizen Stakes for Intel and Samsung

This pretty much sums up the Tizen vision, both for Intel and for Samsung. Tizen means freedom, as it abstracts the hardware from any application.

If successful, Tizen opens up processor sockets to Intel as mobile vendors swap CPUs. Tizen also allows Samsung to choose any processor, while relying on open source and open standards-based code supported by The Linux Foundation.