Design Resources: USB 3.1 and Type-C

By: Chris A. Ciufo, Editor, Embedded Systems Engineering

An up-to-date quick reference list for engineers designing with Type-C.

USB 3.1 and its new Type-C connector are likely in your design’s near future. USB 3.1 runs at up to 10 Gbps, and Type-C is the USB-IF’s “does everything” connector that can be inserted either way (it’s never upside down). The Type-C connector also delivers USB 3.1 speeds plus other gigabit protocols simultaneously, including DisplayPort, HDMI, Thunderbolt, PCI Express and more.

Also new or updated are the Battery Charging (BC) and Power Delivery (PD) specifications that provide up to 100W of charge capability in an effort to eliminate the need for a drawer full of incompatible wall warts.

If you’ve got USB 3.1 “SuperSpeed+” or the Type-C connector in your future, here’s a recent list of design resources, articles and websites that can help get you up to speed.

Start Here: The USB Implementers Forum (USB-IF) governs all of these specs, with lots of input from industry partners like Intel and Microsoft. USB 3.1 (technically Gen 2), Type-C, and PD information is available via the USB-IF, and it’s the best place to go for the actual details (note the hotlinks). Even if you don’t read the specs now, you know you’re going to need to read them eventually.

“Developer Days”: The USB-IF presented this two-day seminar in Taipei in November 2015. I’ve recently discovered the treasure trove of presentations located here (Figure 1). The “USB Type-C Specification Overview” is the most comprehensive I’ve seen lately.

Figure 1: USB-IF held a “Developer Days” forum in Taipei November 2015. These PPT’s are a great place to start your USB 3.1/Type-C education. (Image courtesy: USB-IF.org.)

What is Type-C? Another decent 1,000-foot view is my first article on Type-C: “Top 3 Essential Technologies for Ultra-mobile, Portable Embedded Systems.” Although the article covers other technologies, it compares Type-C against the other USB connectors and introduces designers to the USB-IF’s Battery Charging (BC) and Power Delivery (PD) specifications.

What is USB? To go further back to basics, “3 Things You Need to Know about USB Switches” starts at USB 1.1 and brings designers up to USB 3.0 SuperSpeed (5 Gbps). While the article is about switches, it also reminds readers that at USB 3.0 (and 3.1) speeds, signal integrity can’t be ignored.

USB Plus What Else? The article “USB Type-C is Coming…” overlays the aforementioned information with Type-C’s sideband capabilities that can transmit HDMI, DVI, Thunderbolt and more. Here, the emphasis is on pins, lines, and signal integrity considerations.

More Power, Scotty! Type-C’s 100W Power Delivery sources energy in either direction, depending upon the enumeration sequence between host and target. Components are needed to handle this logic, and the best source of info is from the IC and IP companies. A recent Q&A we did with IP provider Synopsys “Power Where It’s Needed…” goes behind the scenes a bit, while TI’s E2E Community has a running commentary on all things PD. The latter is a must-visit stop for embedded designers.
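The upshot of that enumeration is a source advertising a set of voltage/current capabilities and a sink picking one that covers its load. Here’s a minimal Python sketch of that matching step, assuming the PD-style fixed rails of 5 V, 9 V, 15 V and 20 V; the function name and data structure are illustrative, not taken from any real PD stack.

```python
# Illustrative sketch of USB Power Delivery capability matching.
# The 5 V / 9 V / 15 V / 20 V rails mirror the PD spec's standard
# fixed voltages; the names and structure here are hypothetical.

# (voltage in volts, max current in amps) a hypothetical source advertises
SOURCE_CAPS = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]  # 20 V @ 5 A = 100 W

def pick_profile(watts_needed):
    """Return the lowest-voltage source capability that covers the load,
    or None if the request exceeds everything the source offers."""
    for volts, amps in SOURCE_CAPS:
        if volts * amps >= watts_needed:
            return (volts, amps)
    return None

print(pick_profile(10))   # small load fits the 5 V rail: (5.0, 3.0)
print(pick_profile(60))   # laptop-class load needs the 20 V rail: (20.0, 5.0)
```

A real PD policy engine negotiates over the CC wire with timeouts and renegotiation; this sketch only shows why a 100W-capable source can still hand a 5 V profile to a small load.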

Finally, active cables are the future as Type-C interfaces to all manner of legacy interfaces (including USB 2.0/3.0). At IDF 2015, Cypress showed off dongles that converted between specs. Since then, the company has taken the lead in this emerging area, and it’s the first place to go to learn about conversions and dongles (Figure 2).

Figure 2: In the Cypress booth at IDF 2015, the company and its partners showed off active cables and dongles. Here, Type-C (white) converts to Ethernet, HDMI, VGA, and one more I don’t recognize. (Photo by Chris A. Ciufo, 2015.)

Evolving Future: Although USB 3.1 and the Type-C connector are solid and not changing much, IC companies are introducing more highly integrated solutions for the BC, PD and USB 3.1 specifications plus sideband logic. For example, Intel’s Thunderbolt 3 uses Type-C and runs up to 40 Gbps, suggesting that Type-C has substantial headroom and more change is coming. My point: expect to keep your USB 3.1 and Type-C education up-to-date.

Intel Changes Course–And What a Change!

By Chris A. Ciufo, Editor, Embedded Intel Solutions

5 bullets explain Intel’s recent drastic course correction.

Intel CEO Brian Krzanich (Photo by author, IDF 2015.)

I recently opined on the amazing technology gifts Intel has given the embedded industry as the company approaches its 50th anniversary. Yet a few weeks later, the company released downward financials and announced layoffs, restructurings, executive changes and new strategies. Here are five key points from the recent news-storm of (mostly) negative coverage.

1. Layoffs.

Within days of the poor financial news, Intel CEO Brian Krzanich (“BK”) announced that 12,000 loyal employees would have to go. As the event unfolded over a few days, the pain was felt throughout Intel: from the Oregon facility where its IoT Intelligent Gateway strategy resides, to its design facilities in Israel and Ireland, to older fabs in places like New Mexico. Friends of mine at Intel have either been let go or are afraid for their jobs. This is the part about tech—and it’s not limited to Intel, mind you—that I hate the most. Sometimes it feels like a sweatshop where workers are treated poorly. (Check out the recent story concerning BiTMICRO Networks, which really did treat its workers poorly.)

2. Atom family: on its way out. 

This story broke late on the Friday night after the financial news—it was almost as if the company hadn’t planned on talking about it so quickly. But the bottom line is that the Atom never achieved all the goals Intel set out for it: lower price, lower power and a spot in handheld. Of course, much is written about Intel’s failure to wrest more than a token slice out of ARM’s hegemony in mobile. (BTW: that term “hegemony” used to be applied to Intel’s dominance in PCs. Sigh.) Details are still scant, but the current Atom Bay Trail architecture works very nicely, and I love my Atom-based Win8.1 Asus 2:1 with it. But the next Atom iteration (Apollo Lake) looks like the end of the line. Versions of Atom may live on under other names like Celeron and Pentium (though some of these may also be Haswell or Skylake versions).

3. New pillars announced.

Intel used to use the term “pillars” for its technology areas, and BK has gone to great lengths to list the new ones as: Data Center (aka: Xeon); Memory (aka: Flash SSDs and Optane, the 3D XPoint Intel/Micron joint venture); FPGAs (aka: Altera, eventually applied to Xeon co-accelerators); IoT (aka: what Intel used to call embedded); and 5G (a modem technology the company doesn’t really have yet). Mash-ups of these pillars include some of the use cases Intel is showing off today, such as wearables, medical, drones (apparently a personal favorite of BK), RealSense cameras, and smart automobiles including self-driving cars. (Disclosure: I contracted to Intel in 2013 pertaining to the automotive market.)

Intel’s new pillars, according to CEO Brian Krzanich. 5G modems are included in “Connectivity.” Not shown is “Moore’s Law,” which Intel must continue to push to be competitive.

4. Tick-tock goodbye.

For many years, Intel has set the benchmark for process technology and made damn sure Moore’s Law was followed. The company’s cadence of new architecture (Tock) followed by process shrink (Tick) predictably streamed products that found their way into PCs, laptops, and the data center (now “cloud” and soon “fog”). But as Intel approached 22nm, it got harder and harder to keep up the pace as CMOS channel dimensions approached Angstroms (inter-atomic distances). The company has now officially retired Tick-Tock in favor of a three-step cadence of Architecture, Process, and Process tuning. This is in fact where the company is today as the Core series evolved from 4th-gen (Haswell) to 5th-gen (Broadwell—a sort-of interim step) to the recent 6th-gen (Skylake). Skylake is officially a “Tock,” but if you work backwards, it’s kind of a fine-tuned process improvement with new features such as really good graphics, although AnandTech and others lauded Broadwell’s graphics. The next product—Kaby Lake (just “leaked” last week, go figure)—looks to be another process tweak. Now-public specs point to even better graphics, if the data can be believed.

Intel is arguably the industry’s largest software developer, and second only to Google when it comes to Android. (Photo by author, IDF 2015.)

5. Embedded, MCUs, and Value-Add.

This last bullet is my prediction of how Intel is going to climb back out of the rut. Over the years the company mimicked AMD and focused almost singularly on selling x86 CPUs and variants (though it worked tirelessly on enabling technologies like PCIe, WiDi, Android, USB Type-C and much more). It jettisoned value-add MCUs like the then-popular 80196 16-bitter with A/D and the 8751 EPROM-based MCU—conceding all of these products to companies like Renesas (Hitachi), Microchip (PIC series), and Freescale (ARM- and Power-based MCUs, originally for automotive). Yet Intel can combine scads of its technology—including modems, WiFi (think: Centrino), PCIe, and USB—into intelligent peripherals for IoT end nodes. Moreover, the company’s software arsenal even beats IBM’s (I’ll wager), and Intel can apply the x86 code base and tool set to dozens of new products. Or, it could just buy Microchip or Renesas or Cypress.

It pains me to see Intel lay off people, retrench, and appear to fumble around. I actually do think it is shot-gunning things just a bit right now, and officially giving up on developing low-power products for smartphones. Yet it’ll need low power for IoT nodes, too, and I don’t know that Quark and Curie are going to cut it. Still: I have faith. BK is hell-fire-and-brimstone motivated, and the company is anything but stupid. Time to pick a few paths and stay the course.

Quiz question: I’m an embedded system, but I’m not a smartphone. What am I?

In the embedded market, there are smartphones, automotive, consumer…and everything else. I’ve figured out why AMD’s G-Series SoCs fit perfectly into the “everything else”.

Since late 2013 AMD has been talking about its G-Series of Accelerated Processing Unit (APU) x86 devices that mix an Intel-compatible CPU with a discrete-class GPU and a whole pile of peripherals like USB, serial, VGA/DVI/HDMI and even ECC memory. The devices sounded pretty nifty—in either SoC flavor (“Steppe Eagle”) or without the GPU (“Crowned Eagle”). But it was a head-scratcher where they would fit. After all, we’ve been conditioned by the smartphone market to think that any processor “SoC” that doesn’t contain an ARM core isn’t an SoC.

AMD’s Stephen Turnbull, Director of Marketing, Thin Client markets.

Yes, ARM dominates the smartphone market; no surprise there.

But there are plenty of other professional embedded markets that need CPU/GPU/peripherals where the value proposition is “Performance per dollar per Watt,” says AMD’s Stephen Turnbull, Director of Marketing, Thin Clients. In fact, AMD isn’t even targeting the smartphone market, according to General Manager Scott Aylor in his many presentations to analysts and the financial community.

AMD instead targets systems that need “visual compute,” which is any business-class embedded system that mixes computation with single- or multi-display capabilities at a “value price.” What this really means is x86-class processing—and all the goodness associated with the Intel ecosystem—plus one or more LCDs. Even better if those LCDs are high-def, need 3D graphics or other fancy rendering, and if there’s industry-standard software being run such as OpenCL, OpenGL, or DirectX. AMD G-Series SoCs run from 6W up to 25W; the low end of this range is considered very power-thrifty.
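Turnbull’s “performance per dollar per Watt” pitch is just a figure of merit, easy to sanity-check yourself. In the sketch below every score, price, and TDP is invented for illustration; none is an actual AMD or Intel data point.

```python
def perf_per_dollar_per_watt(benchmark_score, price_usd, tdp_watts):
    """The figure of merit quoted for embedded parts: higher is better."""
    return benchmark_score / (price_usd * tdp_watts)

# Two hypothetical parts -- scores, prices and TDPs are made up.
part_a = perf_per_dollar_per_watt(benchmark_score=1200, price_usd=49.0, tdp_watts=6.0)
part_b = perf_per_dollar_per_watt(benchmark_score=2000, price_usd=72.0, tdp_watts=25.0)

# The faster part can still lose: the thrifty 6 W device wins the metric.
print(part_a > part_b)
```

The point of the metric is exactly this inversion: raw benchmark score alone would pick the second part, but dividing by price and power rewards the low-TDP device that embedded bids actually favor.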

What AMD’s G-Series does best is cram an entire desktop motherboard and its peripheral I/O, plus a graphics card, onto a single 28nm SoC. Who needs this? Digital signs—where up to four LCDs make up the whole image—thin clients, casino gaming, avionics displays, point-of-sale terminals, network-attached storage, security appliances, and oh so much more.

G-Series SoC on the top with peripheral IC for I/O on the bottom.

According to AMD’s Turnbull, the market for thin client computers is growing at 6 to 8 percent CAGR (per IDC), and “AMD commands over 50 percent share of market in thin clients.” Recent design wins with Samsung, HP and Fujitsu validate that using a G-Series SoC in the local box provides more-than-ample horsepower for data movement, encryption/decryption of central server data, and even local on-the-fly video encode/decode for Skype or multimedia streaming.

Typical use cases include government offices where all data is server-based, bank branch offices, and “even classroom learning environments, where learning labs standardize content, monitor students and centralize control of the STEM experience,” says AMD’s Turnbull.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

But what about other x86 processors in these spaces? I’m thinking about various SKUs from Intel such as its recent Celeron and Pentium offerings (legacy names, but based on modern Ivy Bridge and Haswell architectures) and various Atom flavors in both dual- and quad-core colors. According to AMD’s published literature, G-Series SoCs outperform dual-core Atoms by 2x (multi-display) or 3x (overall performance) running industry-standard benchmarks for standard and graphics computation.

And then there’s that on-board GPU. If AMD’s Jaguar-based CPU core isn’t enough muscle, the system can load-balance (in performance and power) to move algorithm-heavy loads to the GPU for General Purpose GPU (GPGPU) number crunching. This is the basis for AMD’s efforts to bring the Heterogeneous System Architecture (HSA) spec to the world. Even companies like TI and ARM have jumped onto this one for their own heterogeneous processors.

G-Series: more software than hardware.

In a nutshell, after two years of reading about (and writing about) AMD’s G-Series SoCs, I’m beginning to “get religion” that the market isn’t all about smartphone processors. Countless business-class embedded systems need Intel-compatible processing, multiple high-res displays, lots of I/O, myriad industry-standard software specs…and all for a price/Watt that doesn’t break the bank.

So the answer to the question posed in the title above is simply this: I’m a visually-oriented embedded system. And I’m everywhere.

This blog was sponsored by AMD.

The Secret World of USB Charging

There’s a whole set of USB charging specs you’ve probably never heard of because big-battery smartphones, tablets and 2:1’s demand shorter charge times.

Editor’s note: this particular blog posting is sponsored by Pericom Semiconductor.  

Now that you can buy $5 USB chargers everywhere (mains- and cigarette-lighter-powered), it’s tempting to think of them like LED flashlights: cheap commodity throwaways. And you would’ve been right…until now.

My recent purchase of an Asus T100 Transformer Windows 8.1/Intel Atom 2:1 tablet hybrid forced me to dig into USB charging (Figure).

My own Asus T100 Transformer Book has a “unique” USB charging profile.  (Courtesy: Asus.)

This device is fabulous: it has a convenient micro-USB charging port with OTG support, so there’s no bulky wall wart to lug around. But it refuses to charge normally from any charger+cable except for the (too short) one that came with it.

My plethora of USB chargers, adapters, powered hubs and more will only trickle-charge the T100, taking tens of hours. And it’s not just the device’s 2.0A current requirement, either. There’s something more going on.
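The back-of-the-envelope math behind “tens of hours” is simply battery capacity divided by charge current. The capacity below is a round, illustrative tablet-class number, not the T100’s actual spec:

```python
def charge_hours(capacity_mah, current_ma):
    """Rough charge-time estimate. Real chargers taper current near full
    and lose some energy to heat, so treat this as a lower bound."""
    return capacity_mah / current_ma

battery_mah = 8000  # illustrative tablet-class battery, not the T100's real figure

print(charge_hours(battery_mah, 2000))  # full-rate 2.0 A charge: 4.0 hours
print(charge_hours(battery_mah, 500))   # USB 2.0 trickle at 500 mA: 16.0 hours
```

Drop the trickle rate to the 100 mA a conservative unconfigured port allows and the same battery needs 80 hours, which is exactly the overnight-and-then-some behavior I’m complaining about.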

Just Say “Charge it!”

The USB Implementers Forum (USB-IF) has a whole power-delivery strategy with goals as shown below. Simply stated, USB is now flexible enough to provide the right amount of power to either end of the USB cable.

The USB Power Delivery goals solidify USB as the charger of choice for digital devices. (Courtesy: www.usb.org )

There’s even a USB Battery Charging compliance specification, “BC1.2,” to make sure devices follow the rules. Some of the new power profiles are shown below:

Table 1: USB Implementers Forum (USB-IF) Battery Charging specifications (from their 1.2 compliance plan document October 2011).

The reason for battery charging specs like BC1.2 is that newer devices like Apple’s iPad, Samsung’s Galaxy S5 and Galaxy Tab (and quite possibly my Asus T100 2:1) consume more current and sometimes have the ability to source power to the host device. BC1.2 flexibly delivers the right amount of power and can avoid charger waste.

Communications protocols between the battery’s MCU and the charger’s MCU know how to properly charge a 3000mAh to 10,000mAh battery. Battery chemistry matters, too. As does watching out for heat and thermal runaway; some USB charger ICs take these factors into account.
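The BC1.2 handshake underlying that charger-device conversation boils down to two checks on the D+/D- data lines. The sketch below reduces the detection to booleans; real silicon does this with comparators and timed analog states, so treat it purely as an illustration of the decision logic.

```python
# Simplified model of BC1.2 port detection. A portable device drives a
# small voltage onto D+ (primary detection), then onto D- (secondary
# detection), and watches whether the opposite line follows. Booleans
# stand in for the comparator results here.

def classify_port(dminus_follows_dplus, dplus_follows_dminus):
    """Return the BC1.2 port type:
    SDP -- Standard Downstream Port (plain USB host, low current)
    CDP -- Charging Downstream Port (data plus high charge current)
    DCP -- Dedicated Charging Port (dumb charger, D+/D- shorted)
    """
    if not dminus_follows_dplus:
        return "SDP"   # primary detection failed: ordinary host port
    if dplus_follows_dminus:
        return "DCP"   # D+/D- shorted inside a dedicated charger
    return "CDP"       # charging-capable port that still carries data

print(classify_port(False, False))  # SDP
print(classify_port(True, True))    # DCP
print(classify_port(True, False))   # CDP
```

Secondary detection is what lets a device tell a dumb wall charger (DCP, where D+ and D- are shorted) from a CDP port that can also move data.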

Apple, ever the trend-setter (and master of bespoke specifications), created its own proprietary fast-charging profiles called Apple 1A, 2A and now 2.4A. The Chinese telecom industry has created its own, YD/T 1591-2009. Other suppliers of high-volume devices have developed, or are working on, bespoke charging profiles.

Fast, proper rate charging from Apple, Samsung and others is essential as harried consumers increasingly rely on mobile devices more than laptops. Refer to my complaint above RE: my Asus T100.

Who has time to wait overnight?!

USB Devices Available

Pericom Semiconductor, which is sponsoring this particular blog posting, has been an innovator in USB charging devices since 2007. With a growing list of charge-compatible consumer products, the company has a broad portfolio of USB ICs.

Take the automotive-grade PI5USB8000Q, for instance. Designed for the digital car, this fast charger supports all of the USB-IF BC modes per BC1.2, Apple 1A and 2A, and the Chinese telecom standard. The IC powers down when there’s no load to save the car’s own battery, and can automatically detect the communication language to enable the proper charging profile (Figure). Pretty cool, eh?

The USB-IF’s CDP and SDP charging profiles require communication between the USB charger and the portable device (PD) being charged. Refer to Table 1 for details. (Courtesy: Pericom Semiconductor.)

As For My Asus 2:1?

Sadly, I can’t figure out how the T100 “talks” with its charger, or if there’s something special about its micro USB cable. So I’m stuck.

But if you’re designing a USB charger, a USB device, or just powering one, Pericom’s got you covered. That’s a secret to get all charged up about.

HTML5 Is What’s Needed To Rapidly Develop IVI Automotive Apps

Car manufacturers know that in-car technology like navigation systems sells cars. The pace of the smartphone movement is exposing the painfully slow speed with which automotive manufacturers develop new cars and tech features. Consumers trade out their phones every two years, but a two-year-old car is still considered nearly “new” by Kelley Blue Book. So how can the auto OEMs satisfy consumers’ taste for updated, red-hot in-vehicle infotainment (IVI) systems and add-on Apps?

Elektrobit speaks about HTML5, IVI, and HMI for automotive markets

Automotive software supplier Elektrobit thinks HTML5 is the answer. Coincidentally, so does RIM’s QNX division, along with Intel. QNX supplies “CAR 2” software to several auto OEMs, and Intel is behind Tizen, an HTML5-based competitor to Android. While Samsung has endorsed Tizen for a handful of smartphones, Intel has publicly stated that Tizen is also targeting automotive IVI systems, as I wrote about here.

At a webinar today (5 March 2013) hosted by Automotive World magazine, Elektrobit’s VP of Automotive, Rainer Holve, argued that HTML5 is the perfect language in which to develop and deploy fast-changing IVI HMI software. Most importantly, the car’s core “native” IVI functions should stay separate and subject to safety-critical coding practices.

By partitioning the IVI software in this manner, the two ecosystems are decoupled and can run on their own market- and OEM-driven schedules.  This means that native IVI–like GPS navigation, audio, HVAC, or OBDII diagnostic information like fuel consumption–can be developed slowly and methodically on the typical 2-5+ year automobile OEM cycle.

But the faster-moving, consumer-smartphone-inspired IVI portion, and its fast-moving add-on Apps ecosystem, can move very, very quickly. This allows consumers to refresh not only the Apps, but also allows the OEMs to upgrade the entire HMI experience every few years without having to replace the whole car.

HTML5 decouples the slow automotive dev cycle, from the super-fast IVI App cycle.

While the OEMs would love for an HMI refresh to force the consumer to replace the car every two years, it’s not going to happen. HTML5 is a reasonable alternative and they know it. According to Elektrobit, Chrysler, GM, and Jaguar Land Rover (JLR) have already started projects with HTML5.

HTML5 is an “evolution and cleanup of previous HTML standards,” said Elektrobit’s Holve, and is composed of HTML+CSS+JavaScript, along with new features for A/V, 2D graphics canvas, a 3D API, support for hardware acceleration, and much more.  HTML5 is based upon open standards and is supported by Web Hypertext Application Technology Working Group (WHATWG) and the World Wide Web Consortium (W3C). Independently, W3C is working on a standardized API for JavaScript, which makes the HTML5 value proposition even sweeter.

Besides decoupling the HMI software from the “core” HMI functions, HTML5 would allow third-party Apps developers to swiftly write and deploy applications for IVI systems. Besides Internet connectivity itself, this is the one IVI feature that consumers demand: a choice of what Apps to add whenever they so choose. And since every automobile OEM will have to certify an App for safe in-vehicle use with their particular system, HTML5 allows App developers to create one core App that can be easily modified for multiple manufacturers and their myriad (and differentiated) vehicle models.  In short: HTML5 makes things easier for everyone, yet still allows a robust third-party market to flourish.

It’s important to note how this is both similar to, and differs from, the current IVI strategy of many OEMs that rely solely on the smartphone for Apps. Chevrolet, Peugeot, Renault, Toyota and others tether the smartphone to the IVI system and “mirror” the phone’s Apps on the screen (see my blog on Mirroring). This allows the wildly robust iOS and Android App ecosystems into the car (and soon RIM/BlackBerry and Windows Phone 8), but it comes at a price.

2013 Chevrolet MyLink IVI uses MirrorLink with smartphone apps

In this scenario, the auto OEM must certify every App individually for use in its vehicles, to assure safety and that critical car systems can’t be hacked or compromised. Or, the OEM can allow all Apps to run and hope for the best. One hopes a rogue App doesn’t access the CAN bus and apply the ABS or electric steering.

HTML5, on the other hand, gently forces developers to create Apps destined for IVI systems, but adds only a slight burden on them to make minor changes for each manufacturer’s certification. In this way they’re not barred from the car indiscriminately, but can develop a business of IVI apps separate from their smartphone iOS, Android and other Apps.

Intel's Renee James is betting on HTML5 in Tizen to kickstart transparent computing. (Image taken by author at IDF 2012.)

Will HTML5 be successful? Is it the right answer for the rabid consumer’s taste for car tech, while still giving the auto manufacturer the safety and security they’re required to offer by law? I was skeptical about Tizen until Samsung’s announcements at Mobile World Congress 2013 last month. With Tizen pushing HTML5 for “openness”, it may just gain traction in automotive, too.

Watch this space. We’ll keep you updated.