Design Resources: USB 3.1 and Type-C

By: Chris A. Ciufo, Editor, Embedded Systems Engineering

An up-to-date quick reference list for engineers designing with Type-C.

USB 3.1 and its new Type-C connector are likely in your near-future designs. USB 3.1 and the Type-C connector run at up to 10 Gbps, and Type-C is the USB-IF’s “does everything” connector that can be inserted either way (and is never upside down). The Type-C connector also delivers USB 3.1 speeds plus other gigabit protocols simultaneously, including DisplayPort, HDMI, Thunderbolt, PCI Express and more.

Also new or updated are the Battery Charging (BC) and Power Delivery (PD) specifications, which together provide up to 100W of charging capability in an effort to eliminate the need for a drawer full of incompatible wall warts.

If you’ve got USB 3.1 “SuperSpeed+” or the Type-C connector in your future, here’s a recent list of design resources, articles and websites that can help get you up to speed.

Start Here: The USB Implementers Forum (USB-IF) governs all of these specs, with lots of input from industry partners like Intel and Microsoft. USB 3.1 (it’s actually Gen 2), Type-C, and PD information is available via the USB-IF, and it’s the best place to go for the actual details (note the hotlinks). Even if you don’t read them now, you know you’re going to need to read them eventually.

“Developer Days”: The USB-IF presented this two-day seminar in Taipei in November 2015. I’ve recently discovered the treasure trove of presos located here (Figure 1). The “USB Type-C Specification Overview” is the most comprehensive I’ve seen lately.

Figure 1: USB-IF held a “Developer Days” forum in Taipei in November 2015. These PPTs are a great place to start your USB 3.1/Type-C education. (Image courtesy: USB-IF.org.)

What is Type-C? Another decent 1,000-foot view is my first article on Type-C: “Top 3 Essential Technologies for Ultra-mobile, Portable Embedded Systems.” Although the article covers other technologies, it compares Type-C against the other USB connectors and introduces designers to the USB-IF’s Battery Charging (BC) and Power Delivery (PD) specifications.

What is USB? To go further back to basics, “3 Things You Need to Know about USB Switches” starts at USB 1.1 and brings designers up to USB 3.0 SuperSpeed (5 Gbps). While the article is about switches, it also reminds readers that at USB 3.0 (and 3.1) speeds, signal integrity can’t be ignored.

USB Plus What Else? The article “USB Type-C is Coming…” overlays the aforementioned information with Type-C’s sideband capabilities that can transmit HDMI, DVI, Thunderbolt and more. Here, the emphasis is on pins, lines, and signal integrity considerations.

More Power, Scotty! Type-C’s 100W Power Delivery sources energy in either direction, depending upon the negotiation between host and target. Components are needed to handle this logic, and the best source of info is the IC and IP companies. A recent Q&A we did with IP provider Synopsys, “Power Where It’s Needed…,” goes behind the scenes a bit, while TI’s E2E Community has a running commentary on all things PD. The latter is a must-visit stop for embedded designers.
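
Curious what that logic boils down to? Here’s a minimal C sketch of sink-side PD contract selection, assuming a simplified list of source capabilities; real PD controllers implement the full BMC-coded protocol, timing, and many more message types, so treat this as illustration only:

```c
#include <stdio.h>

/* Hypothetical, simplified view of a USB PD source capability (PDO).
   Real PDOs are packed 32-bit objects defined in the USB PD spec. */
typedef struct {
    unsigned mv; /* offered voltage, millivolts */
    unsigned ma; /* maximum current, milliamps  */
} pdo_t;

/* Sink-side logic: request the highest-wattage offer within our limits. */
static int pick_pdo(const pdo_t *offers, int n, unsigned max_mv, unsigned max_mw)
{
    int best = -1;
    unsigned best_mw = 0;

    for (int i = 0; i < n; i++) {
        unsigned mw = (offers[i].mv / 1000U) * offers[i].ma; /* approximate mW */
        if (offers[i].mv <= max_mv && mw <= max_mw && mw > best_mw) {
            best = i;
            best_mw = mw;
        }
    }
    return best; /* index to name in the PD Request message, or -1 */
}

int main(void)
{
    /* Example source capabilities: 5V/3A, 9V/3A, and the 100W 20V/5A profile. */
    const pdo_t offers[] = { { 5000, 3000 }, { 9000, 3000 }, { 20000, 5000 } };
    int choice = pick_pdo(offers, 3, 20000, 100000);

    if (choice >= 0)
        printf("Requesting PDO %d: %umV @ %umA\n",
               choice, offers[choice].mv, offers[choice].ma);
    return 0;
}
```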

Finally, active cables are the future as Type-C connects to all manner of legacy interfaces (including USB 2.0/3.0). At last year’s IDF 2015, Cypress showed off dongles that converted between specs. Since then, the company has taken the lead in this emerging area and is the first place to go to learn about conversions and dongles (Figure 2).

Figure 2: In the Cypress booth at IDF 2015, the company and its partners showed off active cables and dongles. Here, Type-C (white) converts to Ethernet, HDMI, VGA, and one more I don’t recognize. (Photo by Chris A. Ciufo, 2015.)

Evolving Future: Although USB 3.1 and the Type-C connector are solid and not changing much, IC companies are introducing more highly integrated solutions for the BC, PD and USB 3.1 specifications plus sideband logic. For example, Intel’s Thunderbolt 3 uses Type-C and runs up to 40 Gbps, suggesting that Type-C has substantial headroom and more change is coming. My point: expect to keep your USB 3.1 and Type-C education up-to-date.

Intel Changes Course–And What a Change!

By Chris A. Ciufo, Editor, Embedded Intel Solutions

5 bullets explain Intel’s recent drastic course correction.

Intel CEO Brian Krzanich (Photo by author, IDF 2015.)

I recently opined on the amazing technology gifts Intel has given the embedded industry as the company approaches its 50th anniversary. Yet a few weeks later, the company released downward financials and announced layoffs, restructurings, executive changes and new strategies. Here are five key points from the recent news-storm of (mostly) negative coverage.

1. Layoffs.

Within days of the poor financial news, Intel CEO Brian Krzanich (“BK”) announced that 12,000 loyal employees would have to go. As the event unfolded over a few days, the pain was felt throughout Intel: from the Oregon facility where its IoT Intelligent Gateway strategy resides, to its design facilities in Israel and Ireland, to older fabs in places like New Mexico. Friends of mine at Intel have either been let go or are afraid for their jobs. This is the part about tech—and it’s not limited to Intel, mind you—that I hate the most. Sometimes it feels like a sweatshop where workers are treated poorly. (Check out the recent story concerning BiTMICRO Networks, which really did treat its workers poorly.)

2. Atom family: on its way out. 

This story broke late on the Friday night after the financial news—it was almost as if the company hadn’t planned on talking about it so quickly. But the bottom line is that the Atom never achieved all the goals Intel set out for it: lower price, lower power and a spot in handheld. Of course, much is written about Intel’s failure to wrest more than a token slice out of ARM’s hegemony in mobile. (BTW: that term “hegemony” used to be applied to Intel’s dominance in PCs. Sigh.) Details are still scant, but the current Atom Bay Trail architecture works very nicely, and I love my Atom-based Win8.1 Asus 2:1. But the next Atom iteration (Apollo Lake) looks like the end of the line. Versions of Atom may live on under other names like Celeron and Pentium (though some of these may also be Haswell or Skylake versions).

3. New pillars announced.

Intel used to use the term “pillars” for its technology areas, and BK has gone to great lengths to list the new ones as: Data Center (aka: Xeon); Memory (aka: Flash SSDs and Optane, the 3D XPoint Intel/Micron joint venture); FPGAs (aka: Altera, eventually applied to Xeon co-accelerators); IoT (aka: what Intel used to call embedded); and 5G (a modem technology the company doesn’t really have yet). Mash-ups of these pillars include some of the use cases Intel is showing off today, such as wearables, medical, drones (apparently a personal favorite of BK), RealSense cameras, and smart automobiles including self-driving cars. (Disclosure: I contracted to Intel in 2013 pertaining to the automotive market.)

Intel’s new pillars, according to CEO Brian Krzanich. 5G modems are included in “Connectivity.” Not shown is “Moore’s Law,” which Intel must continue to push to be competitive.

4. Tick-tock goodbye.

For many years, Intel has set the benchmark for process technology and made damn sure Moore’s Law was followed. The company’s cadence of new architecture (Tock) followed by process shrink (Tick) predictably streamed products that found their way into PCs, laptops, and the data center (now “cloud” and soon “fog”). But as Intel approached 22nm, it got harder and harder to keep up the pace as CMOS channel dimensions approached Angstroms (inter-atomic distances). The company has now officially retired Tick-Tock in favor of a three-step cadence of Process, Architecture, and Optimization. This is in fact where the company is today as the Core series evolved from 4th-gen (Haswell) to 5th-gen (Broadwell—a sort-of interim step) to the recent 6th-gen (Skylake). Skylake is officially a “Tock,” but if you work backwards, it’s kind of a fine-tuned process improvement with new features such as really good graphics, although AnandTech and others lauded Broadwell’s graphics. The next product—Kaby Lake (just “leaked” last week, go figure)—looks to be another process tweak. Now-public specs point to even better graphics, if the data can be believed.

Intel is arguably the industry’s largest software developer, and second only to Google when it comes to Android. (Photo by author, IDF 2015.)

5. Embedded, MCUs, and Value-Add.

This last bullet is my prediction of how Intel is going to climb back out of the rut. Over the years the company mimicked AMD and nearly singularly focused on selling x86 CPUs and variants (though it worked tirelessly on software and specs like PCIe, WiDi, Android, USB Type-C and much more). It jettisoned value-add MCUs like the then-popular 80196 16-bitter with A/D and the 8751 EPROM-based MCU—conceding all of these products to companies like Renesas (Hitachi), Microchip (PIC series), and Freescale (ARM- and Power-based MCUs, originally for automotive). Yet Intel can combine scads of its technology—including modems, Wi-Fi (think: Centrino), PCIe, and USB—into intelligent peripherals for IoT end nodes. Moreover, the company’s software arsenal even beats IBM’s (I’ll wager) and Intel can apply the x86 code base and tool set to dozens of new products. Or, they could just buy Microchip or Renesas or Cypress.

It pains me to see Intel lay off people, retrench, and appear to fumble around. I actually do think it is shot-gunning things just a bit right now, and officially giving up on developing low-power products for smartphones. Yet they’ll need low power for IoT nodes, too, and I don’t know that Quark and Curie are going to cut it. Still: I have faith. BK is hell-fire-brimstone motivated, and the company is anything but stupid. Time to pick a few paths and stay the course.

Quiz question: I’m an embedded system, but I’m not a smartphone. What am I?

In the embedded market, there are smartphones, automotive, consumer…and everything else. I’ve figured out why AMD’s G-Series SoCs fit perfectly into the “everything else.”

Since late 2013 AMD has been talking about their G-Series of Accelerated Processing Unit (APU) x86 devices that mix an Intel-compatible CPU with a discrete-class GPU and a whole pile of peripherals like USB, serial, VGA/DVI/HDMI and even ECC memory. The devices sounded pretty nifty—in either SoC flavor (“Steppe Eagle”) or without the GPU (“Crowned Eagle”). But it was a head-scratcher where they would fit. After all, we’ve been conditioned by the smartphone market to think that any processor “SoC” that didn’t contain an ARM core wasn’t an SoC.

AMD’s Stephen Turnbull, Director of Marketing, Thin Client markets.

Yes, ARM dominates the smartphone market; no surprise there.

But there are plenty of other professional embedded markets that need CPU/GPU/peripherals where the value proposition is “Performance per dollar per Watt,” says AMD’s Stephen Turnbull, Director of Marketing, Thin Clients. In fact, AMD isn’t even targeting the smartphone market, according to General Manager Scott Aylor in his many presentations to analysts and the financial community.

AMD instead targets systems that need “visual compute”: any business-class embedded system that mixes computation with single- or multi-display capabilities at a “value price.” What this really means is x86-class processing—and all the goodness associated with the Intel ecosystem—plus one or more LCDs. Even better if those LCDs are high-def, need 3D graphics or other fancy rendering, and if there’s industry-standard software being run such as OpenCL, OpenGL, or DirectX. AMD G-Series SoCs run from 6W up to 25W; the low end of that range is considered very power thrifty.

What AMD’s G-Series does best is cram an entire desktop motherboard’s worth of peripheral I/O, plus a graphics card, onto a single 28nm SoC. Who needs this? Digital signs—where up to four LCDs make up the whole image—thin clients, casino gaming, avionics displays, point-of-sale terminals, network-attached storage, security appliances, and oh so much more.

G-Series SoC on the top with peripheral IC for I/O on the bottom.

According to AMD’s Turnbull, the market for thin client computers is growing at 6 to 8 percent CAGR (per IDC), and “AMD commands over 50 percent share of market in thin clients.” Recent design wins with Samsung, HP and Fujitsu validate that using a G-Series SoC in the local box provides more-than-ample horsepower for data movement, encryption/decryption of central server data, and even local on-the-fly video encode/decode for Skype or multimedia streaming.

Typical use cases include government offices where all data is server-based, bank branch offices, and “even classroom learning environments, where learning labs standardize content, monitor students and centralize control of the STEM experience,” says AMD’s Turnbull.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

But what about other x86 processors in these spaces? I’m thinking about various SKUs from Intel, such as its recent Celeron and Pentium offerings (legacy names, but based on modern versions of the Ivy Bridge and Haswell architectures) and various Atom flavors in both dual- and quad-core colors. According to AMD’s published literature, G-Series SoCs outperform dual-core Atoms by 2x (multi-display) or 3x (overall performance) running industry-standard benchmarks for standard and graphics computation.

And then there’s that on-board GPU. If AMD’s Jaguar-based CPU core isn’t enough muscle, the system can load-balance (in performance and power) to move algorithm-heavy loads to the GPU for General Purpose GPU (GPGPU) number crunching. This is the basis for AMD’s efforts to bring the Heterogeneous System Architecture (HSA) spec to the world. Even companies like TI and ARM have jumped onto this one for their own heterogeneous processors.
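
To make the load-balancing idea concrete, here’s a minimal OpenCL host-side sketch in C that prefers the on-board GPU and falls back to the CPU cores. It covers device selection only (a real GPGPU app would go on to build a context, queue, and kernels), and assumes an OpenCL runtime is installed:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    /* Prefer the integrated GPU for data-parallel, algorithm-heavy loads... */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        /* ...but fall back to the CPU cores if no GPU is exposed. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL) != CL_SUCCESS) {
            fprintf(stderr, "No OpenCL device found\n");
            return 1;
        }
    }

    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Dispatching compute work to: %s\n", name);
    return 0;
}
```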

G-Series: more software than hardware.

In a nutshell, after two years of reading about (and writing about) AMD’s G-Series SoCs, I’m beginning to “get religion” that the market isn’t all about smartphone processors. Countless business-class embedded systems need Intel-compatible processing, multiple high-res displays, lots of I/O, myriad industry-standard software specs…and all for a price/Watt that doesn’t break the bank.

So the answer to the question posed in the title above is simply this: I’m a visually-oriented embedded system. And I’m everywhere.

This blog was sponsored by AMD.

The Secret World of USB Charging

There’s a whole set of USB charging specs you’ve probably never heard of, because big-battery smartphones, tablets and 2:1s demand shorter charge times.

Editor’s note: this particular blog posting is sponsored by Pericom Semiconductor.  

Now that you can buy $5 USB chargers everywhere (mains- and cigarette-lighter-powered), it’s tempting to think of them like LED flashlights: cheap commodity throw-aways. And you would’ve been right…until now.

My recent purchase of an Asus T100 Transformer Windows 8.1/Intel Atom 2:1 tablet hybrid forced me to dig into USB charging (Figure).

My own Asus T100 Transformer Book has a “unique” USB charging profile. (Courtesy: Asus.)

This device is fabulous, with a convenient micro USB charging port that supports OTG. No bulky wall wart to lug around. But it refuses to charge normally from any charger+cable combination except the (too short) one that came with it.

My plethora of USB chargers, adapters, powered hubs and more will only trickle charge the T100 and take tens of hours. And it’s not just the device’s 2.0A current requirement, either. There’s something more going on.

Just Say “Charge it!”

The USB Implementers Forum (USB-IF) has a whole power delivery strategy, with goals as shown below. Simply stated, USB is now flexible enough to provide the right amount of power to either end of the USB cable.

The USB Power Delivery goals solidify USB as the charger of choice for digital devices. (Courtesy: www.usb.org)

There’s even a USB Battery Charging (UBC) compliance specification called “BC1.2” to make sure devices follow the rules. Some of the new power profiles are shown below:

Table 1: USB Implementers Forum (USB-IF) Battery Charging specifications (from their 1.2 compliance plan document, October 2011).

The reason for UBC is that newer devices like Apple’s iPad, Samsung’s Galaxy S5 and Galaxy Tab devices–and quite possibly my Asus T100 2:1–consume more current and sometimes have the ability to source power to the host device. UBC flexibly delivers the right amount of power and can avoid charger waste.

Communications protocols between the battery’s MCU and the charger’s MCU know how to properly charge a 3000mAh to 10,000mAh battery. Battery chemistry matters, too. As does watching out for heat and thermal runaway; some USB charger ICs take these factors into account.

Apple, ever the trend-setter (and master of bespoke specifications), created its own proprietary fast-charging profiles called Apple 1A, 2A and now 2.4A. The Chinese telecom industry has created its own, YD/T 1591-2009. Other suppliers of high-volume devices have created, or are working on, bespoke charging profiles.
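
To make the “communication” concrete: most of these legacy profiles are signaled with DC levels or terminations on the D+/D- data lines, which get measured before a device commits to a charge current. The C sketch below is a hypothetical, simplified classifier; the thresholds are rounded from public BC1.2 and Apple-charger descriptions, and real charger ICs run a multi-step handshake rather than a one-shot voltage read:

```c
#include <stdio.h>

/* Hypothetical, simplified charger-port classifier.
   Inputs are D+ and D- voltages in millivolts, as read by an ADC. */
typedef enum { PORT_SDP, PORT_DCP_OR_CDP, PORT_APPLE_1A, PORT_APPLE_2A, PORT_UNKNOWN } port_t;

static port_t classify_port(unsigned dp_mv, unsigned dm_mv)
{
    /* Apple chargers park fixed divider voltages (~2.0V / ~2.7V) on D+/D-. */
    if (dp_mv > 1800 && dp_mv < 2200 && dm_mv > 2500) return PORT_APPLE_1A;
    if (dp_mv > 2500 && dm_mv > 1800 && dm_mv < 2200) return PORT_APPLE_2A;

    /* BC1.2 primary detection: the device drives ~0.6V on D+; a charging
       port (DCP or CDP) reflects it back on D- above VDAT_REF (~0.325V). */
    if (dm_mv > 325) return PORT_DCP_OR_CDP;

    return PORT_SDP; /* plain data port: 500mA (USB 2.0) / 900mA (USB 3.0) */
}

int main(void)
{
    const unsigned ma[] = { 500, 1500, 1000, 2000, 0 }; /* charge current per profile */
    port_t p = classify_port(2700, 2000); /* example readings: looks like Apple 2A */
    printf("Detected profile %d, charge at %umA\n", p, ma[p]);
    return 0;
}
```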

Fast, proper rate charging from Apple, Samsung and others is essential as harried consumers increasingly rely on mobile devices more than laptops. Refer to my complaint above RE: my Asus T100.

Who has time to wait overnight?!

USB Devices Available

Pericom Semiconductor, which is sponsoring this particular blog posting, has been an innovator in USB charging devices since 2007. With a growing list of charge-compatible consumer products, the company has a broad portfolio of USB ICs.

Take the automotive-grade PI5USB8000Q, for instance. Designed for the digital car, this fast charger supports all of the USB-IF BC modes per BC1.2, Apple 1A and 2A, and the Chinese telecom standard. The IC powers down when there’s no load to save the car’s own battery, and can automatically detect the communication language to enable the proper charging profile (Figure). Pretty cool, eh?

The USB-IF’s CDP and SDP charging profiles require communication between the USB charger and the portable device (PD) being charged. Refer to Table 1 for details. (Courtesy: Pericom Semiconductor.)

As For My Asus 2:1?

Sadly, I can’t figure out how the T100 “talks” with its charger, or if there’s something special about its micro USB cable. So I’m stuck.

But if you’re designing a USB charger, a USB device, or just powering one, Pericom’s got you covered. That’s a secret to get all charged up about.

The Soft(ware) Core of Qualcomm’s Internet of Everything Vision

Qualcomm supplements silicon with multiple software initiatives.

Update 1: Added attribution to figures.
The numbers are huge: 50B connected devices; 7B smartphones to be sold by 2017; 1000x growth in data traffic within a few years. Underlying all of these devices in the Internet of Things…wait, the Internet of Everything…is Qualcomm. Shipping 700 million chipsets per year on top of a wildly successful IP creation business in cellular modem algorithms, plus being arguably #1 in 3G/4G/LTE with Snapdragon SoCs in smartphones, the company is now setting its sights on M2M connectivity. Qualcomm has perhaps more initiatives in IoT/IoE than any other vendor. Increasingly, those initiatives rely on the software necessary for the global M2M-driven IoT/IoE trend to take root.

Telit Wireless Devcon
Speaking at the Telit Wireless Devcon in San Jose on 15 October, Qualcomm VP Nakul Duggal of the Mobile Computing Division painted a picture showing the many pieces of the company’s strategy for the IoT/E. Besides the aforementioned arsenal of Snapdragon SoC and Gobi modem components, the company is bringing to bear Wi-Fi, Bluetooth, local radio (like NFC), GPS, communications stacks, and a vision for heterogeneous M2M device communication it calls “dynamic proximal networking”. Qualcomm supplies myriad chipsets to Telit Wireless, and Telit rolls them into higher-order modules upon which Telit’s customers add end-system value.

Over eight Telit Wireless modules are based upon Qualcomm modems, as presented at the Telit Wireless Devcon 2013.

But it all needs software in order to work. Here are a few of Qualcomm’s software initiatives.

Modem’s ARM and API Open to All
Many M2M nodes–think of a vending machine, or the much-maligned connected coffee maker–don’t need a lot of intelligence to function. They collect data, perform limited functions, and send analytics and diagnostics to their remote M2M masters. Qualcomm’s Duggal says that the ARM processors in Qualcomm modems are powerful enough to perform that computational load. There’s no need for an additional CPU, so the company is making Java (including Java ME), Linux and ThreadX available to run on its 3rd generation of Gobi LTE modems.
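
For a sense of scale, here’s a hypothetical vending-machine-class telemetry loop in C. The sensor read and uplink call are invented stand-ins for whatever the modem stack actually exposes. The point: this workload doesn’t need a separate applications processor:

```c
#include <stdio.h>

/* Hypothetical M2M node loop: sample, act locally, report upstream.
   read_inventory() and send_report() are invented stand-ins for the
   sensor input and modem uplink a real stack would provide. */
static int read_inventory(void)    { return 42; /* pretend sensor read */ }
static void send_report(int level) { printf("uplink: inventory=%d\n", level); }

int main(void)
{
    for (int tick = 0; tick < 3; tick++) { /* would run forever on a real node */
        int level = read_inventory();
        if (level < 10)
            send_report(level);       /* low stock: alert the M2M master now */
        else if (tick % 3 == 0)
            send_report(level);       /* otherwise, periodic diagnostics */
        /* a real node would now sleep until the next poll interval */
    }
    return 0;
}
```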

Qualcomm is already on its 3rd generation of Gobi LTE modems.

Qualcomm has also opened up the modem APIs and made available its IoT Connection Manager software to make it easier to write closer-to-the-metal code for the modem. Duggal revealed that Qualcomm has partnered with Digi International in this effort as it applies to telematics market segments.

Leverage Smartphone Graphics
And some of those M2M devices on the IoE may have displays–simple UIs at first (like a vending machine), but increasingly complex as the device interacts with the consumer. A restaurant’s digital menu sign, for example, need not run a full-blown PC and Windows Embedded operating system when a version of a Snapdragon SoC will do. After all, the 1080p HDMI graphics needs of an HTC One with S600 far outweigh those of a digital sign. Qualcomm’s graphics accelerators and signal-processing algorithms can easily apply to display-enabled M2M devices. This applies doubly as more intelligence is pushed to the M2M node, alleviating the need to send reams of data up to the cloud for processing.

Digital 6th Sense: Context
Another area Duggal described as the “Digital 6th Sense” might be thought of as contextual computing. Smartphones or wearable fitness devices like Nike’s new FuelBand SE might react differently when they’re outside, at work, or in the home. More than just counting steps and communicating with an App, if the device knows where it is…including precisely where it is inside of a building…it can perform different functions. Qualcomm’s portfolio now includes the full Atheros RF product spectrum, including Bluetooth, Bluetooth LE, NFC, Wi-Fi and more. Software stacks for all of these enable connectivity, but code that meshes (no pun intended) Wi-Fi with GPS data provides both outdoor and indoor position information. Here, Qualcomm’s software melds myriad infrastructure technologies to provide indoor positioning. A partnership with Cisco will bring the technology to consumer locations like shopping malls to coexist with Cisco’s Mobility Services Engine for location-based Apps.
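
As a toy illustration of that meshing, here’s a hypothetical C sketch that picks a position source based on fix quality. The structures and thresholds are invented for illustration; Qualcomm’s actual fusion code is far more sophisticated:

```c
#include <stdio.h>

/* Hypothetical position fix from either subsystem. */
typedef struct {
    double lat, lon;
    double accuracy_m;  /* estimated error radius, meters */
    int    valid;
} fix_t;

/* Prefer GPS outdoors when it's healthy; fall back to Wi-Fi
   positioning indoors, where satellite fixes degrade badly. */
static fix_t fuse(fix_t gps, fix_t wifi)
{
    if (gps.valid && gps.accuracy_m < 20.0) return gps;   /* good sky view */
    if (wifi.valid)                          return wifi; /* likely indoors */
    return gps; /* last resort, even if stale or poor */
}

int main(void)
{
    fix_t gps  = { 47.6097, -122.3331, 85.0, 1 }; /* degraded: indoors */
    fix_t wifi = { 47.6099, -122.3330,  8.0, 1 }; /* AP-based, tighter */
    fix_t best = fuse(gps, wifi);
    printf("Using fix: %.4f, %.4f (+/- %.0fm)\n", best.lat, best.lon, best.accuracy_m);
    return 0;
}
```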

Smart Start at Home
Finally, the smart home is another area ripe for innovation. Connected devices in the home range from the existing set-top box for entertainment, to that connected coffee pot, smart meter, Wi-Fi-enabled Nest thermostat and smoke/CO detector, home health and more. These disparate ecosystems, says Duggal, are similar only in their “heterogeneousness” in the home. That is: they were never designed to be interconnected. Qualcomm is taking its relationships with every smart meter manufacturer, its home gateway/backhaul designs, and its smartphone expertise, and rolling it all into the new AllJoyn software effort.

The open source AllJoyn initiative, spearheaded by Qualcomm, seeks to connect heterogeneous M2M nodes. Think: STB talks to thermostat, or refrigerator talks to garage door opener. (Courtesy: Qualcomm and AllJoyn.org.)

AllJoyn is an open source project that seeks to set a “common language for the Internet of Everything”. According to AllJoyn.org, the “dynamic proximal network” is created using a universal software framework that’s extremely lightweight. Qualcomm’s Duggal described the ability for a device to enumerate that it has a sensor, audio, display, or other I/O. Most importantly, AllJoyn is “bearer agnostic,” running across all leading OSes and connectivity mechanisms.
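
Here’s a hypothetical C sketch of that enumeration idea. To be clear, this is illustrative only and not the AllJoyn API; the real framework expresses capabilities through bus objects and interfaces:

```c
#include <stdio.h>

/* Hypothetical capability advertisement for a proximal-network node;
   the real AllJoyn framework expresses this through bus interfaces. */
typedef enum { CAP_SENSOR, CAP_AUDIO, CAP_DISPLAY, CAP_ACTUATOR } cap_t;

typedef struct {
    const char *name;     /* friendly device name */
    cap_t       caps[4];  /* what I/O this node offers */
    int         ncaps;
} node_t;

static void announce(const node_t *n)
{
    static const char *names[] = { "sensor", "audio", "display", "actuator" };
    printf("%s announces:", n->name);
    for (int i = 0; i < n->ncaps; i++)
        printf(" %s", names[n->caps[i]]);
    printf("\n");
}

int main(void)
{
    node_t thermostat = { "thermostat",  { CAP_SENSOR, CAP_DISPLAY }, 2 };
    node_t stb        = { "set-top box", { CAP_AUDIO,  CAP_DISPLAY }, 2 };
    announce(&thermostat); /* peers (the STB, say) can now decide to talk to it */
    announce(&stb);
    return 0;
}
```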

AllJoyn connectivity diagram. (Courtesy: www.alljoyn.org.)

If Qualcomm is to realize its vision of selling more modems and Snapdragon-like SoCs, making them play well together and exchange information is critical. AllJoyn is pretty new; a new Standard Client (3.4.0) was released on 9 October. It’s unclear to me right now how AllJoyn compares with Wind River’s MQTT-based M2M Intelligent Device Platform, Digi’s iDigi Cloud, or Eurotech’s Everyware Device Framework.

Qualcomm’s on a Roll
With its leadership in RF modems and smartphone processors, Qualcomm is laser-focused on the next big opportunity: the IoT/E. Making all of those M2M nodes actually do something useful will require software throughout the connected network. With so many software initiatives underway, Qualcomm is betting on its next big thing: the Internet of Everything. Software will be the company’s next major “killer app”.

Lattice Brings Reuse from Smartphones to General Embedded

Three new Lattice FPGA reference designs show how smartphone components such as cameras or SoCs with MIPI interfaces can be “bridged” into general embedded applications like automotive.

Lattice today announced three complete reference designs–based upon their FPGAs, of course–that show how users can reuse or repurpose smartphone designs in non-smartphone applications. If a component, such as the SoC application processor, camera sensor, or even RF front-end, uses the industry-standard MIPI (Mobile Industry Processor Interface), Lattice has a way to bridge that device to other components.

Lattice FPGAs in three new reference designs bridge smartphone MIPI…to any embedded design.

Automobiles, for instance, now incorporate forward-looking cameras for lane departure, and driver-facing cameras to gauge alertness. Neither of these applications is a smartphone, but with Lattice’s help both could utilize low-power smartphone CMOS image sensors.

Subaru’s EyeSight system uses twin forward-facing cameras for lane departure and other adaptive safety features. (Courtesy: Subaru of America.)

It’s an interesting concept that’s not only useful, but brings Lattice back into focus as a niche supplier of specialty FPGAs that aren’t behemoth power processors like those from Altera or Xilinx. I say hats off to Lattice. We’ll watch this evolve and keep you updated.

Intel’s Atom Roadmap Makes Smartphone Headway

After being blasted by users and pundits over the lack of “low power” in the Atom product line, new architecture and design wins show Intel’s making progress.

Intel EVP Dadi Perlmutter revealing an early convertible tablet computer at IDF 2012.

A 10-second Google search on “Intel AND smartphone” reveals endless pundit comments on how Intel hasn’t been winning enough in the low power, smartphone and tablet markets.  Business publications wax endlessly on the need for Intel’s new CEO Brian Krzanich to make major changes in company strategy, direction, and executive management in order to decisively win in the portable market. Indications are that Krzanich is shaking things up, and pronto.

Forecasts by IDC (June 2013) and reported by CNET.com (http://news.cnet.com/8301-1035_3-57588471-94/shipments-of-smartphones-tablets-and-oh-yes-pcs-to-top-1.7b/) peg the PC+smartphone+tablet TAM at 1.7B units by 2014, of which 82 percent (1.4B units, $500B USD) are low-power tablets and smartphones. And until recently, I’ve counted only six or so public wins for Intel devices in this market (all based upon the Atom Medfield SoC with Saltwell ISA I wrote about at IDF 2012). Not nearly enough for the company to remain the market leader while capitalizing on its world-leading tri-gate 3D fab technology.

Behold the Atom, Again

Fortunately, things are starting to change quickly. In June, Samsung announced that the Galaxy Tab 3 10.1-inch SKU would be powered by Intel’s Z2560 “Clover Trail+” Atom SoC running at 1.2GHz. According to PC Magazine, “it’ll be the first Intel Android device released in the U.S.” (http://www.pcmag.com/article2/0,2817,2420726,00.asp), and it complements other Galaxy Tab 3 offerings with competing processors. The 7-inch SKU uses a dual-core Marvell chip running Android 4.1, while the 8-inch SKU uses Samsung’s own Exynos dual-core Cortex-A9 ARM chip running Android 4.2. The Atom Z2560 also runs Android 4.2 on the 10.1-incher. Too bad Intel couldn’t have won all three sockets, especially since Intel’s previous lack of LTE cellular support has been solved by the company’s new XMM 7160 4G LTE chip, and supplemented by new GPS/GNSS silicon and IP from Intel’s ST-Ericsson navigation chip acquisition.

The Z2560 Samsung chose is one of three “Clover Trail+” platform SKUs (Z2580, Z2560, Z2520), formerly known merely as “Cloverview” when the dual-core, Saltwell-based, 32-nm Atom SoCs were leaked in Fall 2012. The Intel alphabet soup starts getting confusing because the Atom roadmap looks like rush hour traffic feeding out of Boston’s Sumner tunnel. It’s being pushed into netbooks (for maybe another quarter or two); value laptops and convertible tablets as standalone CPUs; smartphones and tablets as SoCs; and soon into the data center to compete against ARM’s onslaught there, too.

Clover Trail+ replaces Intel’s Medfield smartphone offering and was announced at February’s MWC 2013. According to Anandtech.com (thank you, guys!), Intel’s aforementioned design wins with Atom used the 32nm Medfield SoC for smartphones. Clover Trail is still at 32nm using the Saltwell microarchitecture but targets Windows 8 tablets, while Clover Trail+ targets only smartphones and non-Windows tablets. That explains the Samsung Galaxy Tab 3 10.1-inch design win. The datasheet for Clover Trail+ is here; it shows a dual-core SoC with multiple video CODECs, integrated 2D/3D graphics, on-board crypto, and multiple multimedia engines such as Intel Smart Sound, and it’s optimized for Android and, presumably, Intel/Samsung’s very own HTML5-based Tizen OS (Figure 1).

Figure 1: Intel Clover Trail+ block diagram used in the Atom Z2580, Z2560, and Z2520 smartphone SoCs. This is 32nm geometry based upon the Saltwell microarchitecture and replaces the previous Medfield single-core SoC. (Courtesy: Intel.)

I was unable to find meaningful power consumption numbers for Clover Trail+, but its 32nm geometry compares favorably to ARM’s Cortex-A15 28nm geometry, so Intel should be in the ballpark. Still, the market wonders if Intel finally has the chops to compete. At least it’s getting much, much closer–especially once the on-board graphics performance gets factored into the picture compared to ARM’s lack thereof (for now).

Silvermont and Bay Trail and…Many More Too Hard to Remember

But Intel knows it’s got more work to do to compete against Qualcomm’s home-grown Krait ARM-based microarchitecture, some nVidia offerings, and Samsung’s own in-house designs. Atom will soon be moving to 22nm, and the next microarchitecture is called Silvermont. Intel is finally putting power curves up on the screen, and at product launch I’m hopeful there will be actual Watt numbers shown, too.

For example, Intel is showing off Silvermont’s “industry-leading performance-per-Watt efficiency” (Figure 2). Press data from Intel says the architecture will offer 3x peak performance, or 5x lower power, compared to the Clover Trail+ Saltwell microarchitecture. More code names to track: the quad-core Bay Trail SoC for 2013 holiday tablets; Merrifield, with increased performance and battery life; and finally Avoton, which provides 64-bit energy efficiency for micro servers and boasts ECC, Intel VT and possibly vPro and other security features. Avoton will go head-to-head with ARM in the data center, where Intel can’t afford to lose any ground.

Figure 2: The 22nm Atom microarchitecture called Silvermont will appear in Bay Trail, Avoton and other future Atom SoCs from “Device to Data Center”, says Intel. (Courtesy: Intel.)

Oh Yeah? Who’s Faster Now?

As Intel steps up its game because it has to win or else, the competition is not sitting still. ARM licensees have begun shipping big.LITTLE SoCs, and the company has announced new graphics, DSP, and mid-range cores. (Read Jeff Bier and BDTI’s excellent recent ARM roadmap overview here.)

A recent report by ABI Research (June 2013) tantalized (or, more appropriately, galvanized) the embedded and smartphone markets with the headline “Intel Apps Processor Outperforms NVIDIA, Qualcomm, Samsung”. In comparison tests, ABI Research VP of Engineering Jim Mielke noted that the Intel Atom Z2580 “not only outperformed the competition in performance but it did so with up to half the current drain.”

The embedded market didn’t necessarily agree with the results, and UBM Tech/EETimes published extensive readers’ comments with colorful opinions. On a more objective note, Qualcomm launched its own salvo as we went to press: “you’ll see a whole bunch of tablets based upon the Snapdragon 800 in the market this year,” predicted Raj Talluri, SVP at Qualcomm, as reported by Bloomberg Businessweek.

Qualcomm has made its Snapdragon product line more user-friendly and appears to be readying the line for general embedded-market sales in Snapdragon 200, 400, 600, and “premium” 800 SKU versions. The company has made development tools available (mydragonboard.org/dev-tools) and is selling COM-like DragonBoard modules through partners such as Intrinsyc.

Intel Still Inside

It’s looking like a sure thing that Intel will finally have competitive silicon to challenge ARM-based SoCs in the market that really matters: mobile, portable, and handheld. 22nm Atom offerings are getting power-competitive, and the game will change to an overall system integration and software efficiency exercise.

Intel has for the past five years been emphasizing a holistic all-system view of power and performance. Their work with Microsoft has wrung out inefficiencies in Windows and capitalizes on microarchitecture advantages in desktop Ivy Bridge and Haswell CPUs. Security is becoming important in all markets, and Intel is already there with built-in hardware, firmware, and software (through McAfee and Wind River) advantages. So too has the company radically improved graphics performance in Haswell and Clover Trail+ Atom SoCs…maybe not to the level of AMD’s APUs, but absolutely competitive with most ARM-based competitors.

And finally, Intel has hedged its bets in Android and HTML5. They are on record as writing more Android code (for and with Google) than any other company, and they’ve migrated past MeeGo failures to the might-be-successful HTML5-based Tizen OS which Samsung is using in select handsets.

As I’ve said many times, Intel may be slow to get it…but it’s never good to bet against them in the long run. We’ll have to see how this plays out.

PCI-SIG “nificant” Changes Brewing in Mobile

PCI-SIG Developers Conference, June 25, 2013, Santa Clara, CA

Of five significant PCI Express announcements made at this week’s PCI-SIG Developers Conference, two are aimed at mobile embedded.

From PCI to PCI Express to Gen3 speeds, the PCI-SIG is one industry consortium that lets no grass grow for long. As the embedded, enterprise and server industries roll out PCIe Gen3 and 40G/100G Ethernet, the PCI-SIG and its key constituents like Cadence, Synopsys, LeCroy and others are readying for another speed doubling to 16 GT/s (gigatransfers per second) by 2015. The next step, PCIe 4.0, evolves the per-lane bit rate to 16Gb/s, for a whopping 64 GB/s (big “B”) of total bandwidth at x16 width. PCIe 4.0 Rev 0.5 will be available Q1 2014, with Rev 0.9 targeted for Q1 2015.
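
The back-of-the-envelope math, assuming PCIe 4.0 keeps Gen3’s 128b/130b encoding: 16 GT/s per lane is very nearly 16Gb/s of payload, or roughly 2 GB/s per lane in each direction. Multiply by 16 lanes and by two directions and you arrive at the headline 64 GB/s.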

Table of major PCI-SIG announcements at Developers Conference 2013.

Yet as “SIG-nificant” as this announcement is, PCI-SIG president Al Yanes said it’s only one of five major news items. The others include: a PCIe 3.1 specification that consolidates a series of ECNs in the areas of power, performance and functionality; PCIe Outside the Box, which uses a 1-3 meter “really cheap” copper cable called PCIe OCuLink with an 8Gbps bit rate; plus two embedded and mobile announcements that I’m particularly enthused about. Refer to the table for a snapshot.

New M.2 Specification

The new M.2 specification is a small, mobile embedded form factor designed to replace the previous “Mini PCI” in Mini Card and Half Mini Card sizes. The newer, as-yet-publicly-unreleased M.2 card will be smaller in size and volume but is intended to provide scalable PCIe performance to allow designers to tune SWaP and I/O requirements. PCI-SIG marketing workgroup chair Ramin Neshati told me that M.2 is part of the PCI-SIG’s increased focus on mobile.

The scalable M.2 card is designed as an I/O plug in for Bluetooth, Wi-Fi, WAN/cellular, SSD and other connectivity in platforms including ultrabook, tablet, and “maybe even smartphone,” said Neshati. At Rev 0.7 now, Rev 0.9 will be released soon and the final (Rev 1.0?) spec will become public by Q4 2013.

The PCI-SIG’s impending M.2 form factor is designed for mobile embedded ultrabooks, tablets, and possibly smartphones. The card will have a scalable PCIe interface and is designed for Wi-Fi, Bluetooth, cellular, SSD and more. (Courtesy: PCI-SIG.)

Mobile PCIe (M-PCIe)

Seeing the momentum in mobile and the interest in a PCIe on-board interconnect led the PCI-SIG to work with the MIPI Alliance and create Mobile PCI Express: M-PCIe. The specification is now available to PCI-SIG members and creates an “adapted PCIe architecture” bridge between regular PCIe and MIPI M-PHY.

The Mobile PCI Express (M-PCIe) specification targets mobile embedded devices like smartphones to provide high-speed, on-board PCIe connectivity. (Courtesy: PCI-SIG.)

Using the MIPI M-PHY physical layer allows smartphone and mobile designers to stick with one consistent software interface across multiple platforms, including already-existing OS drivers. PCIe support is “baked into Windows, iOS, Android,” and others, says PCI-SIG’s Neshati. PCI Express also has a major advantage when it comes to interoperability testing, which runs from the protocol stack all the way down to the electrical interfaces. Taken collectively, PCIe brings huge functionality and compliance benefits to the mobile space.

M-PCIe supports MIPI’s Gear 1 (1.25-1.45 Gbps), Gear 2 (2.5-2.9 Gbps) and Gear 3 (5.0-5.8 Gbps) speeds. As well, the M-PCIe spec provides power optimization for short-channel mobile platforms, primarily aimed at WWAN front-end radios and modem IP blocks, and possibly replacing MIPI’s own Universal Flash Storage (UFS) mass-storage interface (administered by JEDEC).

M-PCIe by the PCI-SIG can be used in multiple high speed paths in a smartphone mobile device. (Courtesy: PCI-SIG and MIPI Alliance.)

PCI Express Ready for More

More information on these five announcements will be rolling out soon. But it’s clear that the PCI-SIG sees mobile and embedded as the next target areas for PCI Express in the post-PC era, while still not abandoning the standard’s bread and butter in PCs and high-end/high-performance servers.

HTML5 Is What’s Needed To Rapidly Develop IVI Automotive Apps


Car manufacturers know that in-car technology like navigation systems sells cars. The pace of the smartphone movement is impacting the painfully slow speed with which automotive manufacturers develop new cars and tech features. Consumers trade out their phones every two years, but a two-year-old car is still considered nearly “new” by Kelley Blue Book. So how can the auto OEMs satisfy consumers’ tastes for updated, red-hot in-vehicle infotainment (IVI) systems and add-on Apps?

Elektrobit speaks about HTML5, IVI, and HMI for automotive markets

Automotive software supplier Elektrobit thinks HTML5 is the answer. Coincidentally, so does RIM’s QNX division, along with Intel. QNX supplies “CAR 2” software to several auto OEMs, and Intel is behind Tizen, an HTML5-based competitor to Android. While Samsung has endorsed Tizen for a handful of smartphones, Intel has publicly stated that Tizen is also targeting automotive IVI systems, as I wrote about here.

At a webinar today (5 March 2013) hosted by Automotive World magazine, Elektrobit’s VP of Automotive, Rainer Holve, argued that HTML5 is the perfect language in which to develop and deploy fast-changing IVI HMI software. Most importantly, the car’s core “native” IVI functions should stay separate and subject to safety-critical coding practices.

By partitioning the IVI software in this manner, the two ecosystems are decoupled and can run on their own market- and OEM-driven schedules.  This means that native IVI–like GPS navigation, audio, HVAC, or OBDII diagnostic information like fuel consumption–can be developed slowly and methodically on the typical 2-5+ year automobile OEM cycle.

But the faster-moving, consumer-smartphone-inspired IVI portion, and its fast-moving add-on Apps ecosystem, can move very, very quickly. This allows consumers to refresh not only the Apps, but also allows the OEMs to upgrade the entire HMI experience every few years without having to replace the whole car.

HTML5 decouples the slow automotive dev cycle from the super-fast IVI App cycle.

While the OEMs would love for an HMI refresh to force the consumer to replace the car every two years, it’s not going to happen. HTML5 is a reasonable alternative and they know it. According to Elektrobit, Chrysler, GM, and Jaguar Land Rover (JLR) have already started projects with HTML5.

HTML5 is an “evolution and cleanup of previous HTML standards,” said Elektrobit’s Holve, and is composed of HTML+CSS+JavaScript, along with new features for A/V, a 2D graphics canvas, a 3D API, support for hardware acceleration, and much more. HTML5 is based upon open standards and is supported by the Web Hypertext Application Technology Working Group (WHATWG) and the World Wide Web Consortium (W3C). Independently, the W3C is working on a standardized API for JavaScript, which makes the HTML5 value proposition even sweeter.

Besides decoupling the HMI software from the “core” HMI functions, HTML5 would allow third-party Apps developers to swiftly write and deploy applications for IVI systems. Besides Internet connectivity itself, this is the one IVI feature that consumers demand: a choice of what Apps to add whenever they so choose. And since every automobile OEM will have to certify an App for safe in-vehicle use with their particular system, HTML5 allows App developers to create one core App that can be easily modified for multiple manufacturers and their myriad (and differentiated) vehicle models.  In short: HTML5 makes things easier for everyone, yet still allows a robust third-party market to flourish.

It’s important to note how this is both similar to, and differs from, the current IVI strategy of many OEMs that rely solely on the smartphone for Apps. Chevrolet, Peugeot, Renault, Toyota and others tether the smartphone to the IVI system and “mirror” the phone’s Apps on the screen (see my blog on Mirroring). This allows the wildly robust iOS and Android App ecosystems into the car (and soon RIM/BlackBerry and Windows Phone 8), but it comes at a price.

2013 Chevrolet MyLink IVI uses MirrorLink with smartphone apps.

In this scenario, the auto OEM must certify every App individually for use in their vehicle to assure safety or that critical car systems can’t be hacked or compromised. Or, the OEM can allow all Apps to run and hope for the best. One hopes a rogue App doesn’t access the CAN bus and apply the ABS or electric steering.

HTML5, on the other hand, gently forces developers to create Apps destined for IVI systems, but adds only a slight burden on them to make minor changes for each manufacturer’s certification. In this way they’re not barred from the car indiscriminately, but can develop a business of IVI apps separate from their smartphone iOS, Android and other Apps.

Intel’s Renee James is betting on HTML5 in Tizen to kickstart transparent computing. (Image taken by author at IDF 2012.)

Will HTML5 be successful? Is it the right answer for the rabid consumer’s taste for car tech, while still giving the auto manufacturer the safety and security they’re required to offer by law? I was skeptical about Tizen until Samsung’s announcements at Mobile World Congress 2013 last month. With Tizen pushing HTML5 for “openness”, it may just gain traction in automotive, too.

Watch this space. We’ll keep you updated.

“Mirror, Mira” on the Car’s IVI Screen: Two Different Standards?

You might be hearing about a new technology called MirrorLink that mimics your smartphone’s screen on the larger nav screen in your “connected car”. Or, you might be following the news on Miracast, a more open standard now baked into Android that offers Apple AirPlay-like features to stream smartphone content to devices like connected TVs.

You’d be forgiven if you think the two similarly-named standards are trying to accomplish the same thing. I didn’t understand it either, so I did some digging. Here’s what I found out.

The Smart, Connected Car
When I attended the Paris Auto Show last Fall specifically to investigate in-vehicle infotainment (IVI) trends for the Barr Group under contract to Intel, I got spun up “right quick” on all manner of IVI. From BMW’s iDrive to Chevrolet’s MyLink, the connected car is here. In fact, it’s one of the biggest trends spotted at last week’s 2013 CES in Las Vegas. MirrorLink is being designed into lots of new cars.

BMW’s iDrive IVI uses a native system and doesn’t rely on smartphone mirroring. (Courtesy of BMW.)

The biggest question faced by every auto manufacturer is this: in-car native system, or rely on the apps in one’s smartphone? Ford’s industry breakthrough MyFord Touch with SYNC by Microsoft is native and based upon Microsoft Auto Platform (now called Windows Embedded Automotive 7). Elsewhere, premium brands like BMW, Lexus and Cadillac have designed self-contained systems from the ground up. Some, like BMW, include in-car cellular modems. Others rely on the smartphone only for music and Internet access, but that’s it.

2013 Chevrolet MyLink IVI uses MirrorLink with smartphone apps. (Courtesy of Chevrolet.)

Still others, like Toyota and Chevrolet use a technology called MirrorLink to “mirror” the smartphone’s screen onto the car’s larger IVI. For all apps that make sense to be viewed on the IVI, the system will display them — usually identically to what the user sees on the smartphone (subject to safety and distraction caveats).

MirrorLink is now a trademarked standard owned by the Car Connectivity Consortium that’s designed specifically for cars and smartphones. That means the standard worries about driver distractions, apps that make sense for drivers (such as Google Maps) and those that don’t (such as a panoramic camera stitching application). Apps have to be qualified for use with MirrorLink.

As well, MirrorLink replaces the phone’s touch I/O with in-car I/O such as steering wheel controls, console joysticks, or the IVI head unit’s touchscreen or bezel buttons. Equally as important, audio input from microphones is routed from the car to the phone, while output uses the car’s speakers. The car’s antennas for radio and GPS will be given preference over the phone’s, improving signal reception. The protocols between smartphone and car also take input from the vehicle’s CAN bus, including speed. This means that you can check your email when parked, but not while driving. A great resource for how it works and what the future holds is here.
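
A toy C sketch of that speed gate appears below; the names and the parked-speed threshold are hypothetical, and a production MirrorLink head unit enforces this through the standard’s driver-distraction and app-certification rules rather than a hard-coded switch:

```c
#include <stdio.h>

/* Hypothetical driver-distraction gate keyed off CAN bus vehicle speed. */
typedef enum { APP_NAVIGATION, APP_EMAIL, APP_VIDEO } app_t;

static int app_allowed(app_t app, double speed_kph)
{
    switch (app) {
    case APP_NAVIGATION: return 1;               /* always driver-safe */
    case APP_EMAIL:      return speed_kph < 1.0; /* parked only */
    case APP_VIDEO:      return 0;               /* never on the IVI screen */
    }
    return 0;
}

int main(void)
{
    double speed = 57.0; /* pretend this came off the CAN bus */
    printf("Email app %s at %.0f km/h\n",
           app_allowed(APP_EMAIL, speed) ? "allowed" : "blocked", speed);
    return 0;
}
```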

MirrorLink started as a Nokia idea intended for smartphone-to-car connectivity. Now at version 1.1, it’s a client-server architecture where the IVI head unit is the USB host. It uses industry-standard protocols such as Internet Protocol (IP), USB, Wi-Fi, Bluetooth (BT HFP for telephony, BT A2DP for media), RTP, and UPnP. Recent additions use the Trusted Computing Group’s concepts of device attestation protocols with SKSD/PKSD keys via authentication. The actual screen sharing uses the VNC protocol.

MirrorLink and Trusted Computing Group authentication process for trusted content. (Courtesy of Car Connectivity Consortium.)

What MirrorLink doesn’t yet support is video streaming, since drivers watching video is a no-no in cars (tell that to the Japanese drivers I’ve seen with TVs mounted in their cars!).

Android and Miracast
Miracast, on the other hand, is all about streaming. It’s a Wi-Fi Alliance spec, recently demoed at CES 2013, that’s designed to stream video and photos from smartphones, tablets, and future embedded devices. Like Apple’s AirPlay, it moves stuff from a small screen onto a big TV screen. It’s based upon Wi-Fi’s not-new-but-rarely-used Wi-Fi Direct standard, which avoids routers to establish peer-to-peer connectivity (Intel’s WiDi 3.5 is Miracast-compatible).

The Wi-Fi Alliance Miracast standard streams video from small to large screens, as shown in this excerpt from a YouTube video. (Courtesy of YouTube and Wi-Fi Alliance.)

Miracast supports 1080p HD video and 5.1 surround sound, and CPU vendors including nVidia, TI, Qualcomm, and Marvell have announced plans to support it. Built into the spec is the ability to stream DRM- and HDCP-protected content using already-established HDMI- and DisplayPort-style copy protection schemes. I guess they figure if you’ve got the rights to play it on your phone, you might as well play it on your TV too.

Last Fall, Google updated Android Jelly Bean to 4.2 and included Miracast as part of the update, and I’m thrilled that my Nexus 7 tablet can now, in theory, stream content to my Samsung Smart TV. As Android proliferates throughout the embedded market, I can envision commercial applications where a user might do more than stream a video to another embedded device. Sharing the entire smartphone’s screen can be useful for PowerPoint presentations or demoing just about any Android app in existence. If it’s on the phone’s screen, it can get mirrored via Wi-Fi to another screen.

Will MirrorLink and Miracast Converge?
I doubt the two standards will merge. MirrorLink is exclusively aimed at IVI systems in cars, and the closely curated standard is intended to vet applications to assure safe operation in a vehicle. Miracast is similar in that it mirrors a smartphone’s screen, but there are no limitations on moving between screens, so Miracast is clearly the superset standard, aimed at a broader market.

Ironically, as the Car Connectivity Consortium looks to release MirrorLink Version 2.0, they’re examining Miracast as a way to provide an “alternative video link” for streaming H.264 1080p@30 FPS into the car cabin.

Why? For passenger entertainment. Think about minivans (shudder) and Suburbans loaded with kids.