Expanding DirectX 12: Microsoft Announces DirectX Raytracing

To many out there it may seem like DirectX 12 is still a brand-new technology – and in some ways it still is – but in fact we’ve now been talking about the graphics API for the better part of half a decade. Microsoft first announced the then-next generation graphics API to much fanfare back at GDC 2014, with the initial iteration shipping as part of Windows 10 a year later. For a multitude of reasons DirectX 12 adoption is still in its early days – software dev cycles are long and OS adoption cycles are longer still – but with their low-level graphics API firmly in place, Microsoft’s DirectX teams are already hard at work on the next generation of graphics technology. And now, as we can finally reveal, the future of DirectX is going to include a significant focus on raytracing.

This morning at GDC 2018, as part of a coordinated release with some of their hardware and software partners, Microsoft is announcing a major new feature addition to the DirectX 12 graphics API: DirectX Raytracing. Doing exactly what it says on the tin, DirectX Raytracing (DXR) will provide a standard API for hardware- and software-accelerated ray tracing under DirectX, allowing developers to tap into the rendering model for newer and more accurate graphics and effects.

Going hand-in-hand with both new and existing hardware, the DXR command set is meant to provide a standardized means for developers to implement ray tracing in a GPU-friendly manner. Furthermore, as an extension of the existing DirectX 12 feature set, DXR is meant to be tightly integrated with traditional rasterization, allowing developers to mix the two rendering techniques to suit their needs, using whichever technique delivers the best effects and performance in a given situation.

Why Ray Tracing Lights the Future

Historically, ray tracing and its close colleague path tracing have in most respects been the superior rendering techniques. By rendering a scene more like how human vision works – by focusing on where rays of light come from, what they interact with, and how they interact with those objects – ray tracing can produce a far more accurate image overall, especially when it comes to lighting in all of its forms. Specifically, ray tracing works like human vision in reverse (in a manner of speaking), casting rays out from the viewer to objects and then bouncing them from those objects to the rest of the world, ultimately determining the interactions between light sources and objects in a realistic manner. As a result, ray tracing has been the go-to method for high quality rendering, particularly for static images, movies, and even pre-baked game assets.


Ray Tracing Diagram (Henrik / CC BY-SA 4.0)

However, the computational costs of photorealistic ray tracing are immense, both because of the work required to trace each individual ray and because of the sheer number of rays involved. A ray must be cast for every screen pixel (or more), then reflected, refracted, and recursively regenerated many times over – bouncing from object to object, refracting through objects, diffusing along still others – all to determine the light and color values that ultimately influence a single pixel.


An illustration of ray recursion in a scene
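To put some shape to that description, here is a deliberately simplified sketch of the recursive loop at the heart of a ray tracer. To be clear, this is not DXR code or any real renderer – the function names are made up, and stubs stand in for the expensive intersection and shading work – it simply illustrates how one ray per pixel fans out into many:

```cpp
// Minimal illustrative sketch; stub functions stand in for the real
// intersection and shading work, which is where the cost lives.
struct Vec3 { float x = 0, y = 0, z = 0; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

struct Ray { Vec3 origin, dir; };
struct Hit { bool valid = false; Vec3 point, normal; };

Hit  intersectScene(const Ray&)             { return {}; } // test against every object
Vec3 directLighting(const Hit&)             { return {}; } // shadow rays to each light
Ray  reflectedRay(const Ray& r, const Hit&) { return r; }  // spawn a secondary ray

// One ray, traced recursively: every hit spawns more rays, which is why
// a single pixel can end up costing dozens of intersection tests.
Vec3 trace(const Ray& ray, int depth) {
    Hit hit = intersectScene(ray);
    if (!hit.valid || depth == 0)
        return {};                             // ray escaped, or bounce budget spent
    return directLighting(hit)                 // direct light and shadows
         + 0.5f * trace(reflectedRay(ray, hit), depth - 1); // reflection bounce
}

// ...and at least one such trace per screen pixel, every frame.
void render(Vec3* framebuffer, int width, int height) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[y * width + x] =
                trace(Ray{/* camera ray through pixel (x, y) */}, /*depth=*/4);
}
```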

As a consequence of this, ray tracing has not been suitable for real-time rendering, limiting its use to “offline” use cases where systems can take as much time as they need. Instead, real-time graphics has been built around rasterization, a beautiful, crass hack that fundamentally projects 3D space onto a 2D plane. Reducing much of the rendering process to a 2D image greatly simplifies the total workload, making real-time rendering practical. The downside, as one might expect, is that it’s not as high quality; instead of accurate light simulations, pixel & compute shaders provide approximations of varying quality. And ultimately shaders can’t entirely make up for the lack of end-to-end 3D processing and simulation.
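For contrast, the heart of rasterization’s crass hack fits in a few lines. The sketch below is a bare-bones perspective projection – real pipelines use a 4x4 projection matrix plus clipping and a depth buffer, and the names here are purely illustrative – but the principle of collapsing 3D onto a 2D plane is the same:

```cpp
struct Point3 { float x, y, z; };  // z > 0 means in front of the camera
struct Point2 { float x, y; };

// Similar triangles: the farther away a point is, the closer to the
// center of the image it lands. Once every triangle's vertices have
// been projected this way, lighting becomes a 2D approximation --
// most of the 3D information that ray tracing would use is gone.
Point2 projectToScreen(Point3 p, float focalLength) {
    return Point2{ focalLength * p.x / p.z,
                   focalLength * p.y / p.z };
}
```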

While practical considerations mean that rasterization has been – and will continue to be – the dominant real-time rendering technique for many years to come, the holy grail of real-time graphics is still ray tracing, or at least the quality it can provide. As a result, there’s been an increasing amount of focus on merging ray tracing with rasterization in order to combine the strengths of both rendering techniques: pairing rasterization’s efficiency and existing development pipeline with the accuracy of ray tracing.

While just how best to do that is going to be up to developers on a game-by-game basis, the most straightforward method is to rasterize a scene and then use ray tracing to light it, following that up with another round of pixel shaders to better integrate the two and add any final effects. This leverages ray tracing’s greatest strengths in lighting and shadowing, allowing for very accurate lighting solutions that properly simulate light reflections, diffusion, scattering, ambient occlusion, and shadows. Or to put this another way: faking realistic lighting in rasterization is getting to be so expensive that it may soon be easier to just do it the right way to begin with.
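As a sketch of what such a hybrid frame might look like – and to be clear, these stage names are hypothetical, not actual DXR calls – the flow amounts to three passes:

```cpp
// Hypothetical stage names; none of these are real API calls.
// They just label the three passes described above.
struct Scene {}; struct GBuffer {}; struct LightBuffer {}; struct Image {};

GBuffer     rasterizeGeometry(const Scene&)               { return {}; } // cheap 2D projection pass
LightBuffer traceLightingRays(const GBuffer&)             { return {}; } // ray traced shadows,
                                                                         // reflections, AO
Image       composite(const GBuffer&, const LightBuffer&) { return {}; } // final pixel shaders
void        present(const Image&)                         {}

void renderHybridFrame(const Scene& scene) {
    GBuffer gbuf       = rasterizeGeometry(scene); // rasterize visibility first
    LightBuffer lights = traceLightingRays(gbuf);  // ray trace only the lighting
    present(composite(gbuf, lights));              // integrate the two + final effects
}
```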

Enter DirectX Raytracing

DirectX Raytracing, then, is Microsoft laying the groundwork to make this practical by creating an API for ray tracing that works with the company’s existing rasterization APIs. Technically speaking, GPUs are already generic enough that developers could implement a form of ray tracing purely through shaders today; however, doing so would miss out on the opportunity to tap into specialized GPU hardware units to help with the task, not to mention that the entire process would be non-standard. So both to expose new hardware capabilities and to abstract some of the optimization work around this process to GPU vendors, this functionality is instead being implemented through new API commands for DirectX 12.

But as with Microsoft’s other DirectX APIs, it’s important to note that the company isn’t defining how the hardware should work, only that the hardware needs to support certain features. Past that, it’s up to the individual hardware vendors to create their own backends for executing DXR commands. As a result – and especially as this is so early – everyone from Microsoft to the hardware vendors is being intentionally vague about how hardware acceleration is going to work.

At the base level, DXR will have a full fallback layer for working on existing DirectX 12 hardware. As Microsoft’s announcement is aimed at software developers, they’re pitching the fallback layer as a way for developers to get started on using DXR today. It’s not the fastest option, but it lets developers immediately try out the API and begin writing software to take advantage of it while everyone waits for newer hardware to become more prevalent. However, the fallback layer is not limited to just developers – it’s also a catch-all to ensure that all DirectX 12 hardware can support ray tracing – and talking with hardware developers, it sounds like some game studios may try to include DXR-driven effects as soon as late this year, if only as an early technical showcase to demonstrate what DXR can do.

When the fallback layer is invoked, DXR will be executed via DirectCompute compute shaders, which are already supported on all DX12 GPUs. On the whole GPUs are not great at ray tracing, but they’re not half-bad either. As GPUs have become more flexible they’ve become easier to map to ray tracing, and there are already a number of professional solutions that can use GPU farms for ray tracing. Faster still, of course, is mixing that with optimized hardware paths, and this is where hardware acceleration comes in.

Microsoft isn’t saying just what hardware acceleration of DXR will involve, and the high-level nature of the API means that it’s rather easy for hardware vendors to mix hardware and software stages as necessary. This means that it’s up to GPU vendors to provide the execution backends for DXR and to make DXR run as efficiently as possible on their various microarchitectures. When it comes to implementing those backends in turn, there are some parts of the ray tracing process that can be done more efficiently in fixed-function hardware than in shaders, and as a result Microsoft is giving GPU vendors the means to accelerate DXR with this hardware in order to further close the performance gap between ray tracing and rasterization.

DirectX Raytracing Planned Support
Vendor            Support
AMD               Indeterminate - Driver Due Soon
NVIDIA Volta      Hardware + Software (RTX)
NVIDIA Pre-Volta  Software

For today’s reveal, NVIDIA is simultaneously announcing that they will support hardware acceleration of DXR through their new RTX Technology. RTX in turn combines previously-unannounced Volta architecture ray tracing features with optimized software routines to provide a complete DXR backend, while pre-Volta cards will use the DXR shader-based fallback option. Meanwhile AMD has also announced that they’re collaborating with Microsoft and that they’ll be releasing a driver in the near future that supports DXR acceleration. The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.

Though ultimately, the idea of hardware acceleration may be a (relatively) short-lived one. Since the introduction of DirectX 12, Microsoft’s long-term vision – and indeed the GPU industry’s overall vision – has been for GPUs to become increasingly general-purpose, with successive generations of GPUs moving farther and farther in this direction. As a result there is talk of GPUs doing away with fixed-function units entirely, and while this kind of thinking has admittedly burnt vendors before (Intel Larrabee), it’s not unfounded. Greater programmability will make it even easier to mix rasterization and ray tracing, and farther in the future still it could lay the groundwork for pure ray tracing in games.

Unsurprisingly then, the actual DXR commands for DX12 are very much designed for a highly programmable GPU. While I won’t get into programming minutiae better served by Microsoft’s dev blog, Microsoft’s eye is solidly on the future. DXR will not introduce any new execution engines to the DX12 model – the two primary engines remain the graphics (3D) and compute engines – and indeed Microsoft is treating DXR as a compute task, meaning it can be run on top of either engine. Meanwhile DXR will introduce multiple new shader types to handle ray processing, including ray-generation, closest-hit, any-hit, and miss shaders. Finally, the 3D world itself will be described using what Microsoft is terming the acceleration structure, a representation of the full 3D environment that has been optimized for GPU traversal.
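To give a feel for how those pieces fit together on the host side, here is a rough sketch built on the D3D12 raytracing interface names from Microsoft’s DXR developer materials (ID3D12GraphicsCommandList4, SetPipelineState1, D3D12_DISPATCH_RAYS_DESC); the experimental SDK available today differs in its details, so treat this as illustrative rather than canonical:

```cpp
#include <d3d12.h>

// Rough host-side sketch of launching a DXR workload. The pipeline state
// object bundles the ray-generation, hit, and miss shaders; the dispatch
// desc carries the shader tables and launch dimensions.
void dispatchRaysSketch(ID3D12GraphicsCommandList4* cmdList,
                        ID3D12StateObject* rtPipeline,
                        D3D12_DISPATCH_RAYS_DESC desc,  // shader tables filled
                                                        // in by the caller
                        UINT width, UINT height) {
    // The acceleration structure -- the GPU-optimized 3D scene -- is
    // assumed to have been built earlier (via
    // BuildRaytracingAccelerationStructure) and bound to the shaders
    // through the root signature.
    desc.Width  = width;   // one ray-generation shader invocation per pixel
    desc.Height = height;
    desc.Depth  = 1;
    cmdList->SetPipelineState1(rtPipeline); // raytracing pipeline state object
    cmdList->DispatchRays(&desc);           // runs as a compute-style workload,
                                            // on either the 3D or compute engine
}
```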

Eyes on the Future

Like the announcement of DirectX 12 itself back in 2014, today’s announcement of DirectX Raytracing is meant to set the stage for the future for Microsoft and its hardware and software partners. Interested developers can get started with DXR today by enabling Win10 FCU’s experimental mode. Meanwhile top-tier software developers like Epic Games, Futuremark, DICE, Unity, and Electronic Arts’ SEED group are already announcing that they plan to integrate DXR support into their engines. And, as Microsoft promises, there are more groups yet to come.
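For those looking to experiment, the opt-in happens at the API level before device creation. D3D12EnableExperimentalFeatures and the D3D12ExperimentalShaderModels GUID are standard d3d12.h entry points; the raytracing-specific GUID comes from the separate experimental DXR SDK, so the name used below (D3D12RaytracingPrototype) is an assumption based on that SDK’s headers rather than something in the standard Windows SDK:

```cpp
#include <d3d12.h>

// Experimental features must be enabled before any D3D12 device exists,
// and the call only succeeds with Windows 10 Developer Mode turned on.
// D3D12RaytracingPrototype is assumed from the experimental DXR SDK's
// headers; it is not part of the standard Windows SDK.
extern "C" const GUID D3D12RaytracingPrototype;

bool enableExperimentalDXR() {
    const IID features[] = { D3D12ExperimentalShaderModels,
                             D3D12RaytracingPrototype };
    return SUCCEEDED(D3D12EnableExperimentalFeatures(
        2, features, nullptr, nullptr));
}
```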


Project PICA PICA from SEED, Electronic Arts

Though even with the roughly one year head start that Microsoft’s closest developers have received, my impression from all of this is that DXR is still a very long-term project – perhaps even more so than DirectX 12. While DX12 was a new API for existing hardware functions, DXR is closer to a traditional DirectX release in that it’s a new API (or rather, new DX12 commands) that goes best with new hardware. And as there’s essentially no consumer hardware on the market right now that offers hardware DXR acceleration, that means DXR really is starting from the beginning.

The big question, I suppose, is just how useful the pure software fallback mode will be – whether anything can meaningfully be done on even today’s high-end video cards without fixed-function hardware for ray tracing. I have no doubt that developers will include some DXR-powered features to show off their wares early on, but making the best use of DXR feels like it will require hardware support as a baseline feature. And as we’ve seen with past feature launches like DX11, having a new API become the baseline will likely take quite a bit of time.

The other interesting aspect of this is that Microsoft isn’t announcing DXR for the Xbox One at this time. Windows and the Xbox One are practically joined at the hip when it comes to DX12, which makes me wonder what role the consoles will have to play. After all, what finally killed DX9 in most respects was the release of the 8th generation consoles, where DX11/12 functionality was a baseline. So we may see something similar happen with DXR, in which case we’re talking about a transition that’s 2+ years out.

However we should hopefully get some more answers later this week at GDC. Microsoft is presenting a couple of different DXR-related sessions, the most important of which, DirectX: Evolving Microsoft's Graphics Platform, is being presented by Microsoft, DICE, and SEED. So stay tuned for more news from GDC.

Update:

Some of the game developers presenting about DXR this week have already posted trailers of the technology in action.
