How a Space Warp Technique Helps Gamers



At CES in January, AMD acknowledged the strength of the mobile market and announced that it will add a Vega-architecture-based Graphics Processing Unit (GPU) to its mobile lineup.

In Las Vegas, details were tantalizingly scant, but a few weeks later Nick Pandher, Director of Market Development for Radeon Professional Graphics, and Scott Wasson, Senior Manager of Technical Marketing at AMD, were happy to flesh out more of the details in a conversation with EECatalog.

Like the company’s other discrete Vega offerings, the low-power Radeon Vega GPU uses HBM2 memory technology to reduce the size and thickness of the GPU. HBM2 is the next generation of the High Bandwidth Memory (HBM) standard, in which memory dies are stacked to increase density. “It is based on Vega GPU architecture, introduced last year in bigger chips,” says Scott Wasson. “This is the smaller version, where we are taking the graphics architecture and offering it across products.” Its slim profile and high performance target thin and light notebooks.

Meeting Mobile Size Demands
The size has been reduced significantly compared with earlier generations, such as the RX850, which had multiple GDDR memory chips ringed around the GPU, Wasson points out. In a laptop, HBM2 shrinks the footprint from roughly that of two decks of playing cards, length by width, to that of a single sugar packet, he says.

Figure 1: AMD CEO Dr. Lisa Su holds the latest Vega GPU.

HBM2 also allows for more capacity and more bandwidth in a smaller footprint and at lower power, “as everything is very close together,” explains Wasson. The wide, internal memory interface of HBM2 means it does not have to run at high clock speeds, yet it can still provide a lot of bandwidth, he continues. Because switching power tracks clock speed, Wasson notes, that structure gives HBM2 “some nice power saving properties compared to GDDR5,” the other industry standard, he says.
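
Wasson’s point about a wide, slow interface matching a narrow, fast one can be made concrete with a back-of-the-envelope calculation. The sketch below uses representative figures, a 1024-bit HBM2 stack at 2 Gb/s per pin versus a 256-bit GDDR5 array at 8 Gb/s per pin; these are illustrative assumptions, not AMD specifications.

```python
# Back-of-the-envelope comparison of a wide, slow memory interface (HBM2)
# with a narrow, fast one (GDDR5). All figures are representative values
# chosen for illustration, not AMD specifications.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# One HBM2 stack: a 1024-bit interface running at a modest 2 Gb/s per pin.
hbm2 = bandwidth_gb_s(1024, 2.0)    # 256 GB/s

# Eight GDDR5 chips ringed around the GPU: 256-bit bus at 8 Gb/s per pin.
gddr5 = bandwidth_gb_s(256, 8.0)    # 256 GB/s

print(f"HBM2 stack:  {hbm2:.0f} GB/s at 2 Gb/s per pin")
print(f"GDDR5 array: {gddr5:.0f} GB/s at 8 Gb/s per pin")
# Comparable bandwidth, but HBM2 gets there at a quarter of the per-pin
# clock, which is where the power and board-area savings come from.
```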

The drive for size reduction responds to some of the challenges mobile gaming poses. “Gaming workloads are one of the things that drive development of PCs over time, as they use a lot of power for the graphics chips and the Central Processing Unit (CPU); they have to work together,” says Wasson. Fitting the kind of speed and rendering prowess that makes a gaming experience compelling into a mobile form factor creates power consumption problems. Other considerations include how power consumption affects battery life and how much weight and space the battery and processor occupy.
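
To make the battery-life consideration concrete, a notebook’s runtime is roughly battery capacity divided by average power draw. The quick estimate below assumes a hypothetical 60 Wh battery and invented idle and gaming power figures, not measurements of any AMD product.

```python
# Rough battery-life estimate: hours of runtime = battery capacity (Wh)
# divided by average system power draw (W). All numbers are hypothetical.

def battery_life_hours(battery_wh: float, system_power_w: float) -> float:
    return battery_wh / system_power_w

BATTERY_WH = 60.0  # assumed capacity for a thin-and-light notebook

for label, gpu_w, rest_of_system_w in [("light desktop use", 3.0, 7.0),
                                       ("gaming load", 35.0, 20.0)]:
    total_w = gpu_w + rest_of_system_w
    print(f"{label:17s}: {total_w:4.0f} W total -> "
          f"{battery_life_hours(BATTERY_WH, total_w):4.1f} h")
```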

Another consideration is thermal management. “The cooling solution that will dissipate heat generated by the chip is directly tied to the power consumption in the chip,” notes Wasson.

Vega Mobile is built to be small and consume relatively low power, but one benchmark for its performance is meeting the bar for Virtual Reality (VR) solutions with a standard level of performance. Wasson points out that the Vega Mobile GPU is not VR-ready just yet, but he anticipates that partners will make VR-related announcements later this year.

Virtual Reality Complexities
VR gaming products are already shipping. In November 2017, analyst firm Canalys reported that one million VR headsets were shipped globally in Q3 alone, setting a benchmark for the industry. Another research company, Statista, predicts that global shipments of VR devices will increase from 2017’s 3.7 million to five million, headed by Sony with two million units, followed by Facebook’s Oculus (one million), HTC Vive, Microsoft, and other players.

A particular challenge for GPUs in VR applications is to provide visual feedback fast enough when the gamer moves his or her head quickly; lag can cause nausea. The window of time to avoid a sense of vertigo or nausea is about 20 milliseconds, says Wasson. AMD worked with Oculus and HTC to create asynchronous time warp, asynchronous space warp, and asynchronous reprojection, all names for techniques that deliver a low-latency response by feeding late telemetry from the headset to the back end of the system and shifting the position of the frame.

This requires scheduling hardware in AMD’s GPUs and software built to support the interruption of work and the injection of new work that can provide a quick update. It also prioritizes work. Wasson describes it thus: “I am working on x, but in order to work on a perception problem with the user, I need to switch and work on y. And I am going to shift quickly to that operation, get the frame out, and then switch back.” For the sophisticated operation of interrupting a schedule, Wasson says AMD must use, in new ways, architectural features that were built in ahead of time.
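
The scheduling behavior Wasson describes, interrupt x, handle y, get the frame out, then switch back, can be sketched as a per-frame decision. The Python below is a conceptual toy with invented names and timings (Frame, reproject, schedule_frame); it is not AMD’s driver code or the Oculus and HTC runtime APIs.

```python
# Toy model of asynchronous time/space warp scheduling. If the full render
# ("x") will finish inside the ~20 ms budget, present it; otherwise preempt
# it and re-project the previous frame to the latest head pose ("y").
# Names and numbers are invented for illustration.

from dataclasses import dataclass

FRAME_BUDGET_MS = 20.0  # roughly the window before users feel vertigo or nausea

@dataclass
class Frame:
    head_pose_deg: float  # head orientation the frame is aligned to

def reproject(previous: Frame, latest_pose_deg: float) -> Frame:
    """Shift an already-rendered frame to the newest head pose.
    Far cheaper than rendering a whole new frame."""
    return Frame(head_pose_deg=latest_pose_deg)

def schedule_frame(render_time_ms: float, previous: Frame,
                   latest_pose_deg: float) -> tuple:
    """Choose between the full render and a warp of the previous frame."""
    if render_time_ms <= FRAME_BUDGET_MS:
        # Full render makes the deadline: present it, aligned to the sampled pose.
        return "rendered", Frame(head_pose_deg=latest_pose_deg)
    # Render is running long: preempt it and warp the previous frame instead.
    # (On real hardware the interrupted render then resumes; this toy drops it.)
    return "warped", reproject(previous, latest_pose_deg)

if __name__ == "__main__":
    last = Frame(head_pose_deg=0.0)
    # Simulated frames: (estimated render time in ms, current head pose in degrees)
    for render_ms, pose in [(14.0, 1.0), (27.0, 6.5), (16.0, 8.0)]:
        action, last = schedule_frame(render_ms, last, pose)
        print(f"{action:8s} frame aligned to pose {last.head_pose_deg:+.1f} deg")
```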

 

Figure 2: VR graphics need techniques like asynchronous space warp to avoid nausea and vertigo. (Image credit: HTC Vive.)

The next step from gaming is to use high-performance, low-power GPUs outside of the gaming arena. “What we are seeing today is more extensive frameworks,” says Nick Pandher.

“We had OpenCL to introduce an open view to programming a GPU that was vendor-agnostic. What we are starting to develop is taking these next-generation frameworks built around specific deep neural networks or deep learning use cases and adapting these to be very high performance on a GPU.” Deep learning frameworks (for example, TensorFlow, Caffe, and Torch) are areas of interest. “TensorFlow is a big interest for Vega-type architectures in compute use cases outside of the gaming space,” he says. In autonomous driving, for example, algorithms can look at elements in an image, and the GPU can dissect them in real time to provide a view.

“We see a variety of people using Vega class GPUs to analyze Gbytes of data and use that to improve an algorithm … An algorithm can become indicative of how a user is driving, and the algorithm should adapt,” he proposes. Pandher also suggests that as higher capability is introduced, people want to keep frameworks on a CPU and move select parts of an algorithm to GPUs for higher performance. This way, the 20-plus years of investment in CPU-based algorithms can be leveraged.
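
The split Pandher describes, keeping the existing pipeline on the CPU and moving only the compute-heavy kernel to the GPU, might look something like the TensorFlow sketch below. The tensor shapes and the matmul-plus-ReLU kernel are placeholder assumptions rather than a real workload; the device-placement calls are standard TensorFlow.

```python
# Minimal sketch of CPU/GPU work splitting with TensorFlow device placement.
# The data and the "hot" kernel are placeholders chosen for illustration.

import tensorflow as tf

# CPU side: the legacy part of the pipeline (e.g., feature preparation).
with tf.device("/CPU:0"):
    features = tf.random.uniform((4096, 512))
    weights = tf.random.uniform((512, 128))

# GPU side: the dense linear algebra that benefits from the GPU's parallelism.
# Fall back to the CPU if no GPU is visible to TensorFlow.
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
with tf.device(device):
    activations = tf.nn.relu(tf.matmul(features, weights))

print("ran the hot kernel on", device, "-> output shape:", activations.shape)
```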

“We are barely touching the tip of the iceberg for deep learning,” says Pandher. “As capabilities on the GPU side become more enriched with performance, we will see more of these experimental areas turn into proper, defined products as well,” he predicts.


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.
