Tighter Budgets Are Better for this Systems Supplier

General Micro Systems (GMS) has evolved from VME SBCs to rugged bespoke “shoebox” systems for military vehicles.

Ben Sharfi, founder of General Micro Systems in Southern California, is a well-known, colorful personality in the VME industry. Known for his “That’s ridiculous!” outbursts at VITA trade meetings and his not-so-subtle advertisements poking fun at competitors, he started building VME SBCs in 1979—practically coincident with the introduction of the 6U form factor. Today, GMS still makes SBCs but is finding more traction building highly integrated, rugged small form-factor systems.

Edited excerpts follow.

Chris Ciufo, EECatalog: Ben, we go back a long way. Give us a very brief history of GMS and explain where the company is today.

Ben Sharfi, GMS: We incorporated in 1979 and shipped our first VME board in 1980, right at the same time both Force Computer and Motorola introduced theirs, making us the third in the new market. Our part numbers were “V01” then; now we’re shipping “V399” and up. So we’re a few part numbers later than those early boards. Today, we’re doing lots of rugged systems and many of them are for the Army dealing with some real issues and problems they’re having.

EECatalog: What kinds of issues is the Army facing?

Sharfi: Cross-domain, multi-domain and security are big issues. The Army assigns certain [computer and data access] privileges based upon domains such as SIPRNet and NIPRNet [Secret Internet Protocol Router Network; Nonsecure IP Router Network] and others, depending upon the mission. The issue is that every domain requires a separate, segregated system, which can get big and needlessly redundant in vehicles and mobile platforms, given the state of today’s technology and processing power.

There’s a big movement to address the size and weight problem using multi-domain, which is one system that can address two or more domains. Additionally, there’s cross-domain where one system can talk to another domain within the same environment. This is the traditional partitioned “red/black” NSA architecture that the military has used for a while.

We were hired by the Army to create a multi-domain system that runs red/black within the same box. The Army plans to install this on a variety of vehicles to save space and weight, and our box will be running SIPR and NIPR simultaneously in one small box. Also, we’re offering a cross-domain architecture that provides a secure networking switch. This is possible because of our RuggedCool technology and the great capabilities in some of the latest processors from Intel and others.

EECatalog: Before we dive into ICs, let’s talk about VME where it all started for GMS. Your website shows lots of systems—what about VME and VPX?

Sharfi: Our commitment to VME is second to none, and we’ve also got a whole bunch of VPX boards, too. We still do a lot of community work through VITA and are committed to these specs because we have to be. But one of the problems with [VPX] is that it’s crazy expensive and crazy big. And even worse, despite everyone’s best efforts, there’s really no standard.

In the old days, you could plug VME boards from two vendors into the backplane and they would interoperate. The same was true of CompactPCI. These boards would ‘play with’ other boards. The OpenVPX standard has in some ways actually made this worse, even with the predefined profiles. Now anyone can add any pins they want and create any kind of format.

EECatalog: Ben, you’re going to have to explain that because this is totally different from my understanding of VPX and OpenVPX. You’re saying that a VITA 65 OpenVPX board from Curtiss-Wright might not be interoperable with a GE Intelligent Platforms board?

Sharfi: Absolutely not! The [VITA] specs opened up the pinouts to allow vendors a great deal of flexibility…to the point of non-interoperability. The issue [is often] PCI Express, and I have to match my PCI Express lanes to theirs. This is the fault of the standard, because if you look at the [personal computer] PCIe cards on the market you’ll only see two choices: x1 and x16. The x8 is a version of the x16 format. And these cards will work in any PC.
But that’s not true in OpenVPX: you can have two x2, two x4, eight x1, three x4, or any combination I want, because the pinout is open. I’ve talked to [VITA] about this, and the problem is that it’s run by committee and we no longer have the VME plugfests we used to. Those were designed to see whose boards ran with each other’s; we’d write up the issues and they’d get fixed to assure interoperability. Any vendor could work [interoperate] with anybody. Right now, if we ran the same test, nobody would work with anybody.
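To make the interoperability complaint concrete, here is an illustrative sketch (my own model, not from the interview or from VITA 65): each board maps its fabric pins to a set of PCIe link widths, and two boards only play together if their link counts and widths line up, allowing for the usual power-of-two down-training a PC slot performs.

```python
# Toy model of why open OpenVPX pinouts break plug-and-play, versus
# the PC world where slot widths come from a small fixed set.
STANDARD_PC_WIDTHS = {1, 4, 8, 16}  # lane widths a PC slot negotiates among

def links_compatible(host_widths, card_widths):
    """Pair up host and card links; a pair works if the card's link
    can train down to the width the host offers."""
    if len(host_widths) != len(card_widths):
        return False  # different number of links: no way to mate them
    return all(card >= host and host in STANDARD_PC_WIDTHS
               for host, card in zip(sorted(host_widths), sorted(card_widths)))

# A PC-style pairing works:
print(links_compatible([16], [16]))            # True
# Two vendors' "open" pinout choices often don't:
print(links_compatible([4, 4], [2, 2, 1, 1]))  # False: link counts differ
```

The point of the sketch is that once each vendor is free to choose its own lane partitioning, compatibility becomes a pairwise check rather than a guarantee of the standard.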

EECatalog: But VPX—and OpenVPX—is a growing market with many designs and the DoD is definitely deploying systems. How is this issue being solved?

Sharfi: From VITA’s perspective, there are multiple configurations that specify how the boards are designed to interoperate, and it’s up to the user to decide which configuration to choose and assure that the chosen vendors adhere to it. Here’s why this doesn’t work in practice. The graphics board, for example, may be x16 [Editor’s note: this is common for graphics boards.] But the [Intel] quad-core Haswell CPUs that we’re using are not going to support x16 without a PCI Express fanout switch.

The solution is like what Apple does: dynamic, configurable PCI Express that changes the lanes around when cards are plugged into the desktop Mac. This uses a PCIe switch that configures to whatever the host processor wants and gives the user more options. We’ve done this very thing on our 3U and 6U VPX cards using a PLX PCI Express switch based upon user input.
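The dynamic-lane idea Sharfi describes can be sketched roughly as follows. This is a hypothetical simplification of what a configurable PCIe switch (such as the PLX part he mentions) does: a fixed pool of downstream lanes is parceled out to whatever cards are present, with each link training down in powers of two until it fits.

```python
# Hypothetical sketch of dynamic PCIe lane allocation by a fanout
# switch; the greedy policy here is illustrative, not PLX's actual one.
def allocate_lanes(total_lanes, requested_widths):
    """Grant each card its requested width, halving the request
    (x16 -> x8 -> x4 ...) until it fits the remaining lane pool.
    Returns the width granted to each card, in input order."""
    granted = []
    remaining = total_lanes
    for want in requested_widths:
        width = want
        while width > remaining and width > 1:
            width //= 2          # PCIe links train down in powers of two
        if width > remaining:
            width = 0            # pool exhausted: this card gets no link
        granted.append(width)
        remaining -= width
    return granted

# A x16 graphics card plus a x4 card sharing a 12-lane host:
print(allocate_lanes(12, [16, 4]))  # [8, 4]
```

The benefit, as in the desktop Mac case, is that the host processor’s limited lane count stops dictating which cards can be used; the switch absorbs the mismatch.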

EECatalog: What’s new in the area of rugged shoeboxes that don’t use VME or VPX?

Sharfi: The Army’s WIN-T program is alive and well, and GMS is the exclusive supplier of multi-domain boxes in Bradley, MRAP, Stryker, and all of the program’s [6] ground vehicles. We’ve been doing this for a while, supplying four boxes per vehicle. Based upon that, we’re seeing a need for night vision in MRAPs in a program named—you guessed it—Night Vision.

Eventually replacing the HMMWV, MRAP can carry equipment and troops in all terrain conditions including total darkness. The current configuration calls for using six “Rover” L3 day/night cameras to provide the driver and support personnel with a 360-degree view around the vehicle. As you can guess, six real-time cameras fusing data to a single screen with under 1 frame of latency [30 ms] is a massive computational task.
The Army looked at many different solutions, including VPX, rackmounted equipment, and regular servers. The problems with these were cost, space, and most importantly, heat. Don’t forget that this is a very rugged environment. Tough challenge.

EECatalog: I’ll bet this most recent PR you sent me for the 4:1 “Tarantula” shoebox was your answer, right?

Sharfi: You got it. It’s literally four systems combined into one system. It’s smaller than a shoebox and replaces what would have taken four 2U chassis in the past. It’s based upon Intel’s Xeon Haswell EP processor with 10 cores running at 2.4 GHz each, with 12 cores shipping sometime soon. It’s literally the fastest embedded server class product out there. We add 512-bit wide memory to get extreme memory speed to talk to one of up to six hardware secure virtual machines—do you see the relationship to the multi-domain product I described earlier?

We add an analog-to-digital camera feed converter, plus a hardware video switch matrix to pipe in the data. The camera data is converted to a lossless open standard called “GigE Vision,” then fed to a switch. This intelligent video switch is based upon a Vitesse device with 24 Gigabit Ethernet ports plus two 10 Gigabit Ethernet ports. It’s a Cisco-like switch, but embedded inside this rugged shoebox.

The multiplexed video is then sent to the processors—all in under a frame cycle. Finally, we created removable canisters that can store up to 32 TB of SSD storage to keep the data for post analysis. Our storage solution replaced separate RAID controllers and storage—and it’s all in our rugged shoebox! [Editor’s note: the block diagram for this high density box is shown in Figure 1; the box itself in Figure 2.]

Figure 1. Block diagram for the MRAP Night Vision rugged shoebox.

Figure 2. The GMS “Tarantula” Xeon-based 6-VM rugged chassis.

EECatalog: The creativity of these boxes is impressive. What’s the secret?

Sharfi: There are several things we do that make this possible. First, our systems are very highly and tightly integrated. You couldn’t stuff this much in without some serious engineering. Second, the competition for our products is often OpenVPX, backplanes, and modular chassis and power supplies. But our customers sometimes don’t care about a modular, open-standards approach. They’re dealing with a box-level LRU [line-replaceable unit]. This allows us to bring our best technologists to solving program requirements.

The tighter the budgets are, the more successful we seem to be.


Chris A. Ciufo is editor-in-chief for embedded content at Extension Media, which includes the EECatalog print and digital publications and website, Embedded Intel® Solutions, and other related blogs and embedded channels. He has 29 years of embedded technology experience and holds degrees in electrical engineering and in materials science, with an emphasis on solid-state physics. He can be reached at
cciufo@extensionmedia.com.

