July 6, 2013

Instant Radiosity

Requirements: Visual Studio 2012, Direct3D 11
Source Code: Download the code
Video: Watch the video

Finally, I would like to present my instant radiosity application for the Cornell Box [10], which is based on the articles [1] and [2]. The scene consists of a white ceiling, a white floor, a white back wall, a red left wall, a green right wall, two equally sized white boxes and a point light source acting as the primary light source.

With a GeForce GTX 680 graphics card, I get more than 60 fps, with the diffuse component of direct and indirect illumination, including shadows, recomputed on every update of the primary light source position.

Computing direct lighting

Initially, world space positions, normals and albedo of the geometry are rendered into a g-buffer, followed by a shadow map generation pass.
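
As an illustration, the g-buffer fill pixel shader might look like the sketch below; the layout and names are assumptions, not the actual code from the demo:

    Texture2D    g_albedoTexture : register(t0);
    SamplerState g_sampler       : register(s0);

    struct GBufferPSInput
    {
        float4 positionCS : SV_Position;
        float3 positionWS : POSITION;
        float3 normalWS   : NORMAL;
        float2 uv         : TEXCOORD0;
    };

    struct GBufferOutput
    {
        float4 position : SV_Target0; // world space position
        float4 normal   : SV_Target1; // world space normal
        float4 albedo   : SV_Target2; // diffuse albedo
    };

    GBufferOutput FillGBufferPS(GBufferPSInput input)
    {
        GBufferOutput output;
        output.position = float4(input.positionWS, 1.0f);
        output.normal   = float4(normalize(input.normalWS), 0.0f);
        output.albedo   = g_albedoTexture.Sample(g_sampler, input.uv);
        return output;
    }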

Shadows

Cube shadow maps are considered the standard solution for rendering shadows from a point light source. Since a cube texture is sampled with a vector, obtaining soft shadows with percentage closer filtering (PCF) is not straightforward. Instead, I am using the virtual shadow depth cube map approach [3]: the six faces of the cube map are rendered into six tiles within a single 2D depth texture (an unwrapped cube map), which can then be sampled with PCF. In my implementation the size of the shadow map is 1024x1024.

To handle filtering at the border of a tile, a small margin is added around each tile. Moreover, while rendering the shadow map, the field of view of the camera is taken slightly larger than 90 degrees, so that the frustum covers the border texels as well [8].
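
For a tile of \(R\) texels with a border of \(b\) texels on each side, requiring that the central \(R\) texels still span exactly 90 degrees gives (my own derivation; [8] may state it differently):

\(\large fov = 2\arctan\left(1 + \frac{2b}{R}\right)\)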

On the Direct3D side, the virtual shadow cube map can be rendered in one pass using the instancing mechanism of the geometry shader, routing the output to a particular tile identified by the SV_ViewportArrayIndex semantic.
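
A minimal sketch of such a geometry shader (the constant buffer layout and names are assumptions, not the actual code from the demo):

    cbuffer LightMatrices : register(b0)
    {
        float4x4 g_lightViewProj[6]; // view-projection matrix per cube face
    };

    struct TileGSIn  { float4 positionWS : POSITION; };
    struct TileGSOut
    {
        float4 positionCS : SV_Position;
        uint   viewport   : SV_ViewportArrayIndex; // selects the tile
    };

    // One geometry shader instance per cube face: each instance transforms
    // the triangle with its face matrix and sends it to its own viewport.
    [instance(6)]
    [maxvertexcount(3)]
    void ShadowTileGS(triangle TileGSIn input[3],
                      uint faceIndex : SV_GSInstanceID,
                      inout TriangleStream<TileGSOut> stream)
    {
        TileGSOut output;
        output.viewport = faceIndex;
        [unroll]
        for (int i = 0; i < 3; ++i)
        {
            output.positionCS = mul(input[i].positionWS,
                                    g_lightViewProj[faceIndex]);
            stream.Append(output);
        }
        stream.RestartStrip();
    }

The six viewports covering the tiles are set once on the application side with RSSetViewports.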

Now we need an efficient way to access depth values from the unwrapped cube map for comparison in a shading pass, given a vector from the light center to the pixel being processed. The authors of [3],[8] suggest using an indirection cube map: a cube texture each face of which contains texture coordinates redirecting into the corresponding tile of the unwrapped cube map, taking the tile borders for PCF into account.
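
In the shading pass the lookup then becomes a two-step fetch. A sketch, assuming the comparison value has already been transformed into the same space as the values stored in the unwrapped map:

    TextureCube<float2>    g_indirectionMap     : register(t0);
    Texture2D<float>       g_unwrappedShadowMap : register(t1);
    SamplerState           g_pointSampler       : register(s0);
    SamplerComparisonState g_pcfSampler         : register(s1);

    float SampleShadow(float3 lightToPixel, float comparisonDepth)
    {
        // 1. Redirect the cube lookup vector into tile texture coordinates.
        float2 uv = g_indirectionMap.Sample(g_pointSampler, lightToPixel);
        // 2. Hardware PCF against the stored depth.
        return g_unwrappedShadowMap.SampleCmpLevelZero(g_pcfSampler, uv,
                                                       comparisonDepth);
    }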

The indirection cube map can be generated entirely on the GPU with the following steps (a sketch of these shaders follows the list):
  • create a cube texture with a format suitable for storing texture coordinates (u, v);
  • set the input layout to null and draw a triangle strip of 4 vertices defining the corners of a cube face;
  • using the SV_VertexId semantic, calculate the clip space position of the vertex in the vertex shader and pass it to the geometry shader;
  • using the instancing mechanism in the geometry shader, calculate for each vertex the texture coordinates of the corresponding tile within the unwrapped cube map;
  • pass the calculated texture coordinates as a vertex attribute to the pixel shader, along with the SV_RenderTargetArrayIndex semantic identifying the cube face index;
  • in the pixel shader, write the interpolated texture coordinates to the render target representing the cube face.
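
Putting these steps together, the generation pass might look like the sketch below; the tile rectangle encoding is an assumption:

    struct IndirectionVSOut { float4 positionCS : SV_Position; };

    // Full screen quad generated from SV_VertexId, no vertex buffer bound.
    IndirectionVSOut IndirectionVS(uint id : SV_VertexId)
    {
        IndirectionVSOut output;
        float2 corner = float2(id & 1, id >> 1); // (0,0) (1,0) (0,1) (1,1)
        output.positionCS = float4(corner * 2.0f - 1.0f, 0.0f, 1.0f);
        return output;
    }

    cbuffer TileRects : register(b0)
    {
        // (offset.xy, scale.xy) of each tile inside the unwrapped map,
        // already shrunk by the PCF border.
        float4 g_tileRect[6];
    };

    struct IndirectionGSOut
    {
        float4 positionCS : SV_Position;
        float2 tileUV     : TEXCOORD0;
        uint   face       : SV_RenderTargetArrayIndex; // cube face index
    };

    [instance(6)]
    [maxvertexcount(3)]
    void IndirectionGS(triangle IndirectionVSOut input[3],
                       uint faceIndex : SV_GSInstanceID,
                       inout TriangleStream<IndirectionGSOut> stream)
    {
        IndirectionGSOut output;
        output.face = faceIndex;
        [unroll]
        for (int i = 0; i < 3; ++i)
        {
            output.positionCS = input[i].positionCS;
            // Map the clip space corner to [0,1] and then into the tile
            // rectangle of this face within the unwrapped map.
            float2 corner = float2( input[i].positionCS.x,
                                   -input[i].positionCS.y) * 0.5f + 0.5f;
            output.tileUV = g_tileRect[faceIndex].xy
                          + corner * g_tileRect[faceIndex].zw;
            stream.Append(output);
        }
        stream.RestartStrip();
    }

    float2 IndirectionPS(IndirectionGSOut input) : SV_Target
    {
        return input.tileUV; // interpolated redirection coordinates
    }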

Lighting

During a full screen triangle pass, we compute the diffuse contribution and shadow intensity of the direct lighting from the primary light source, based on the previously filled g-buffer.

Direct lighting + direct shadows

Computing indirect lighting 

Depositing virtual point lights 

Whenever the position of the primary light changes, or on the very first frame, the virtual point lights (VPLs) have to be recalculated. In this implementation 256 VPLs are employed per primary light source. To define the directions of the VPLs, uniformly distributed sample points on a unit sphere are computed. As soon as the sample points have been evaluated on the application side, they are passed to a dedicated compute shader, where ray tracing against the scene takes place. Each thread, representing a shooting ray, is responsible for finding the nearest object in the scene intersected by the ray. A new VPL is positioned at the point where the ray hits the geometry; its color and direction are defined by the surface color and surface normal at the hit point. In the end, the obtained data is written to an unordered access view to be used later by the indirect lighting and shadow render passes.
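
A minimal sketch of such a compute shader is given below. The buffer layout, names and the TraceScene helper are illustrative, not the actual code from the demo; TraceScene here intersects only the floor plane to keep the sketch self-contained:

    struct VPL
    {
        float3 position;
        float3 normal;
        float3 flux; // surface color modulated by the light intensity
    };

    StructuredBuffer<float3> g_rayDirections : register(t0); // 256 directions
    RWStructuredBuffer<VPL>  g_vpls          : register(u0);

    cbuffer LightData : register(b0)
    {
        float3 g_lightPosition;
        float3 g_lightIntensity;
    };

    // Placeholder scene intersection: only the floor plane y = 0 is tested;
    // the real routine tests all surfaces of the Cornell Box.
    bool TraceScene(float3 origin, float3 dir,
                    out float t, out float3 normal, out float3 albedo)
    {
        t      = -origin.y / dir.y;
        normal = float3(0.0f, 1.0f, 0.0f);
        albedo = float3(1.0f, 1.0f, 1.0f);
        return dir.y < 0.0f && t > 0.0f;
    }

    [numthreads(256, 1, 1)]
    void DepositVplsCS(uint threadId : SV_DispatchThreadID)
    {
        float  t;
        float3 hitNormal, hitAlbedo;
        if (TraceScene(g_lightPosition, g_rayDirections[threadId],
                       t, hitNormal, hitAlbedo))
        {
            VPL vpl;
            vpl.position = g_lightPosition + t * g_rayDirections[threadId];
            vpl.normal   = hitNormal;
            vpl.flux     = g_lightIntensity * hitAlbedo;
            g_vpls[threadId] = vpl;
        }
    }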

Shadows

Next, for each VPL a paraboloid shadow map (512x512) covering a hemisphere is rendered [5],[6]. If polygons are large with respect to the light source, their edges can become distorted, because the paraboloid projection is non-linear but is applied only at the vertices, so long edges stay straight where they should curve. To address that issue the hardware tessellation stage is used. Currently, the scene is tessellated statically with inner and edge tessellation factors equal to 8. A more appropriate solution would be to tessellate the geometry adaptively for each VPL, calculating the screen space area of a triangle in the patch constant function and then increasing the tessellation factors according to the screen space size.
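
Since the paraboloid projection is applied per (tessellated) vertex, its core looks roughly as follows. This is the standard mapping from [5],[6]; the function name and depth normalization are my own:

    // Paraboloid projection of a position given in the space of the VPL
    // (VPL at the origin, looking down +z). nearZ/farZ bound the hemisphere.
    float4 ParaboloidProject(float3 positionVS, float nearZ, float farZ)
    {
        float  d = length(positionVS);
        float3 p = positionVS / d;            // point on the unit sphere
        // The paraboloid maps the +z hemisphere onto the unit disk.
        float2 xy = p.xy / (p.z + 1.0f);
        float  depth = (d - nearZ) / (farZ - nearZ);
        return float4(xy, depth, 1.0f);
    }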

Virtual point light contribution

Instead of evaluating the contribution of each VPL over the whole g-buffer, the latter is split into 4x4 tiles in the spirit of interleaved sampling [4]. Each tile is a low resolution version of the entire image. The result is stored in a so-called split g-buffer, which contains split versions of the world space position and normal buffers. The normal buffer storage in the split g-buffer is depicted below.
World space normals


For each tile we assign an approximately equal number of VPLs, and a pixel shader pass is launched to compute the diffuse intensity and shadow factor contributed by each of the assigned VPLs to a given pixel of the tile.
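
The index arithmetic for the 4x4 split might look like the sketch below; the exact scatter/gather convention is an assumption on my part:

    static const uint TILE_COUNT = 4; // 4x4 split

    // Full resolution pixel -> position inside the split g-buffer.
    uint2 SplitAddress(uint2 p, uint2 fullSize)
    {
        uint2 tileSize  = fullSize / TILE_COUNT; // resolution of one tile
        uint2 tileIndex = p % TILE_COUNT;        // tile the pixel goes to
        uint2 inTile    = p / TILE_COUNT;        // position within the tile
        return tileIndex * tileSize + inTile;
    }

    // Split g-buffer position -> original full resolution pixel (gather).
    uint2 GatherAddress(uint2 s, uint2 fullSize)
    {
        uint2 tileSize  = fullSize / TILE_COUNT;
        uint2 tileIndex = s / tileSize;
        uint2 inTile    = s % tileSize;
        return inTile * TILE_COUNT + tileIndex;
    }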

Computed indirect lighting for each tile

Once the indirect lighting has been computed for each tile, the results are combined (gathered) into a single full-sized buffer.
Combined indirect lighting

After the indirect lighting has been gathered, a structured 4x4 noise pattern is visible, because different tiles use different VPL sets. To get rid of these artifacts, an additional smoothing pass is done with a geometry-aware filter [7]. In particular, the following filter can be used:
\(\Large filter(t)=\frac{\displaystyle\sum_{t_i\in K(t)}\omega(t,t_i)I(t_i)}{\displaystyle\sum_{t_i\in K(t)}\omega(t,t_i) }\),

where \(I(t_i)\) is a pixel value from the input image \(I\) in the rectangular neighborhood \(K(t)\) around a pixel location \(t\), and \(\omega(t,t_i)\) is a weighting function producing a weight for the two pixel locations \(t\) and \(t_i\) so as to avoid blurring across depth discontinuities or strong normal changes.
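
A possible pixel shader implementation of the filter is sketched below. The binary weighting and its thresholds are illustrative assumptions; the demo may use a different \(\omega\):

    Texture2D<float4> g_indirect  : register(t0); // gathered indirect lighting
    Texture2D<float4> g_normals   : register(t1); // world space normals
    Texture2D<float4> g_positions : register(t2); // world space positions

    float4 FilterPS(float4 svPos : SV_Position) : SV_Target
    {
        int2   center = int2(svPos.xy);
        float3 n0 = g_normals.Load(int3(center, 0)).xyz;
        float3 p0 = g_positions.Load(int3(center, 0)).xyz;

        float4 sum = 0.0f;
        float  weightSum = 0.0f;
        for (int y = -2; y <= 2; ++y)
        {
            for (int x = -2; x <= 2; ++x)
            {
                int3   tap = int3(center + int2(x, y), 0);
                float3 n = g_normals.Load(tap).xyz;
                float3 p = g_positions.Load(tap).xyz;
                // Accept the tap only if the geometry is similar enough:
                // no strong normal change and no depth discontinuity.
                float w = (dot(n, n0) > 0.9f && distance(p, p0) < 0.1f)
                        ? 1.0f : 0.0f;
                sum       += w * g_indirect.Load(tap);
                weightSum += w;
            }
        }
        // Guard against background pixels where no tap passes the tests.
        return sum / max(weightSum, 1.0f);
    }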
Indirect lighting + indirect shadows after applying geometry aware filter

The computed indirect lighting is additively blended over the previously computed direct lighting.

Direct + indirect lighting


Full screen triangle 

For post processing and shading passes, I am using the full screen triangle approach described in [9], initially demonstrated in the FXAA demo from the NVIDIA SDK. It uses the SV_VertexId semantic provided by the input assembler stage to generate the clip space position and texture coordinates in the vertex shader, without binding an explicit vertex buffer.
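
The whole vertex shader fits in a few lines; this is essentially the well-known snippet from [9]:

    struct FullScreenVSOut
    {
        float4 positionCS : SV_Position;
        float2 uv         : TEXCOORD0;
    };

    // A single triangle covering the whole screen; the three vertices are
    // generated from SV_VertexId alone, no vertex buffer is bound.
    FullScreenVSOut FullScreenVS(uint id : SV_VertexId)
    {
        FullScreenVSOut output;
        output.uv = float2((id << 1) & 2, id & 2); // (0,0) (2,0) (0,2)
        output.positionCS = float4(output.uv * float2(2.0f, -2.0f)
                                   + float2(-1.0f, 1.0f), 0.0f, 1.0f);
        return output;
    }

The triangle is drawn with a plain Draw(3, 0) call and a null input layout.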

Interaction

  • F1 - turn on/off direct light
  • F2 - turn on/off shadows from direct light
  • F3 - turn on/off indirect light
  • F4 - turn on/off shadows from indirect light

Current settings are displayed on the right side of the window title.


  • Left/Right arrow - move the light to the left or right side
  • Up/Down arrow - move the light forward or backward
  • Ctrl + Up/Down arrow - move the light up or down

Some more screenshots for a better view of the color bleeding:


References

1. Hannu Saransaari, Samuli Laine, Janne Kontkanen, Jaakko Lehtinen, Timo Aila, "Incremental Instant Radiosity", in ShaderX6: Advanced Rendering Techniques, 2008
2. Samuli Laine, Hannu Saransaari, Janne Kontkanen, Jaakko Lehtinen, Timo Aila, "Incremental Instant Radiosity for Real-Time Indirect Illumination", in Eurographics Symposium on Rendering, 2007
3. Gary King, William Newhall, "Efficient Omnidirectional Shadow Maps", in Wolfgang Engel (ed.), ShaderX3: Advanced Rendering with DirectX and OpenGL, 2005
4. Alexander Keller, Wolfgang Heidrich, "Interleaved Sampling"
5. Stefan Brabec, Thomas Annen, Hans-Peter Seidel, "Shadow Mapping for Hemispherical and Omnidirectional Light Sources"
6. Jason Zink, "Dual Paraboloid Mapping"
7. Elmar Eisemann, Michael Schwarz, Ulf Assarsson, Michael Wimmer, "Real-Time Shadows", 2012, pp. 344-347
8. NVIDIA presentation, "GPU Programming Exposed: The Naked Truth Behind NVIDIA's Demos", pp. 130-141
9. http://www.altdevblogaday.com/2011/08/08/interesting-vertex-shader-trick/
10. http://www.graphics.cornell.edu/online/box/

September 9, 2012

Omnidirectional Shadow Maps

Requirements: Direct3D 11, Visual Studio 2012
Source code: Omnidirectional Shadow Maps



These days the shadow mapping algorithm seems to dominate shadow rendering in the game and film industries. The idea behind it is the following: place yourself at the light source and render the scene from its position to a texture [1],[2]. All the points visible from this position are lit, and all the others are in shadow. To determine the surfaces visible from the light source, the scene can be rendered to a Z-buffer, called the shadow (depth) map, with writes to the color buffer disabled. Afterwards, another render pass is executed from the viewer position. The position of each rasterized fragment is transformed into light clip space; its z-coordinate defines its distance to the light. If this distance is greater than the corresponding value in the shadow map, the fragment is shadowed; otherwise it is lit.

The technique described above works well for a spot light. For a point (omni) light source, which emits light in all directions, the approach has to be modified: we create six shadow maps, each rendered with a separate view matrix and a 90 degree field of view along the +x, -x, +y, -y, +z and -z axes, so as to cover all the space around the light source. A cube texture is the natural choice for storing such data.

In the vertex shader we calculate the world space vertex position and the world space direction from the light source to the vertex, and pass them as attributes to the geometry shader. Using the instancing mechanism of the geometry shader, we are able to render all the shadow maps in one pass [3]. The SV_GSInstanceID semantic provides the index of the shadow map being processed and allows us to choose the proper light view projection matrix to calculate the clip space position of the vertex, and to redirect the primitive to the proper face of the cube texture via another semantic, SV_RenderTargetArrayIndex. In the pixel shader we receive the interpolated world space light direction, whose length is written to the shadow map.
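
A sketch of this pass (the constant buffer layout and names are assumptions):

    cbuffer LightCB : register(b0)
    {
        float4x4 g_lightViewProj[6]; // one view-projection per cube face
    };

    struct OmniVSOut
    {
        float4 positionWS : POSITION;
        float3 lightDir   : TEXCOORD0; // world space light-to-vertex vector
    };

    struct OmniGSOut
    {
        float4 positionCS : SV_Position;
        float3 lightDir   : TEXCOORD0;
        uint   face       : SV_RenderTargetArrayIndex; // target cube face
    };

    [instance(6)]
    [maxvertexcount(3)]
    void OmniShadowGS(triangle OmniVSOut input[3],
                      uint faceIndex : SV_GSInstanceID,
                      inout TriangleStream<OmniGSOut> stream)
    {
        OmniGSOut output;
        output.face = faceIndex;
        [unroll]
        for (int i = 0; i < 3; ++i)
        {
            output.positionCS = mul(input[i].positionWS,
                                    g_lightViewProj[faceIndex]);
            output.lightDir = input[i].lightDir;
            stream.Append(output);
        }
        stream.RestartStrip();
    }

    // The distance to the light is written to the bound cube face.
    float OmniShadowPS(OmniGSOut input) : SV_Target
    {
        return length(input.lightDir);
    }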

During the shading pass, we again use the vertex shader to calculate the world space light direction and pass it to the pixel shader. There, the interpolated light vector is used both to sample the cube shadow map for the stored distance and, via its length, as the comparison value.
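
The comparison itself might look like this; the bias parameter is my addition to fight surface acne, the demo may handle it differently:

    TextureCube<float> g_shadowCube : register(t0);
    SamplerState       g_sampler    : register(s0);

    // Returns 1 if the pixel is lit, 0 if it is shadowed.
    float OmniShadowFactor(float3 lightToPixel, float bias)
    {
        float stored = g_shadowCube.Sample(g_sampler, lightToPixel);
        return (length(lightToPixel) - bias <= stored) ? 1.0f : 0.0f;
    }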

Interaction:
Press "space" button to launch/stop torus animation.

References:
1. Tomas Akenine-Möller, Eric Haines, Nathaniel Hoffman. 2008. "Real-Time Rendering"
2. Elmar Eisemann, Michael Schwarz, Ulf Assarsson, Michael Wimmer. 2012. "Real-Time Shadows"
3. Jason Zink, Matt Pettineo, Jack Hoxley. 2011. "Practical Rendering and Computation with Direct3D 11"
4. Philipp Gerasimov, "Omnidirectional Shadow Mapping" in GPU Gems, Addison-Wesley, pp. 193-203, 2004.

Shadow Volumes

Requirements: Direct3D 11, Visual Studio 2010
Source Code: Shadow Volumes



This is another algorithm [1],[2] which can cast shadows on arbitrary surfaces and also handles self-shadowing of the occluding objects. Suppose you have a light source located at some point in space and a triangle. Project a ray from that point through each vertex of the triangle out to infinity: you get an infinite pyramid. All the points inside the part of the pyramid below the triangle are shadowed, which is why this region is called a "shadow volume".

Let us assume the camera is located outside the shadow volumes, and trace a ray from it towards some view sample. Each time the ray enters a shadow volume through a front face, we increment a counter; each time it exits through a back face, we decrement it. In the end, the view sample is illuminated if the counter is zero; otherwise it is shadowed. To imitate ray intersections with shadow volumes, the stencil buffer can be used.

The steps of the algorithm are the following:
1. Clear the stencil buffer with zeroes.
2. Render the whole scene, writing only the ambient lighting component to the color buffer and filling the depth buffer.
3. Disable writes to the color and depth buffers; the depth test itself is still performed. Render the front and back faces of the shadow volumes, incrementing and decrementing, correspondingly, the values in the stencil buffer.
4. Render the whole scene again with alpha blending enabled, writing the diffuse and specular material-light interaction components only where the stencil test passes with a value equal to zero.

The approach above requires generating shadow volumes for every triangle of every mesh in the scene (three quadrilaterals per pyramid) and then rendering them to the stencil buffer. Usually this is far too resource- and time-consuming to be applied in practice.

In fact, only the silhouette edges of the mesh need to produce shadow volume quadrilaterals [1]. A silhouette edge is an edge which separates triangles of the mesh that are front-facing and back-facing with respect to the light source. To identify them efficiently, the mesh should use a triangle adjacency topology. Then a geometry shader can be used to calculate the normals of the two neighboring triangles and the direction from the edge to the light source [3]. If an edge indeed turns out to be a silhouette edge, two vertices defining the opposite edge of the quadrilateral are generated, and the primitive is passed further to the rasterizer.
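
A sketch of such a geometry shader is given below; the winding of the extruded quads, depth biasing and volume capping are omitted:

    cbuffer LightCB : register(b0)
    {
        float4x4 g_viewProj;
        float3   g_lightPosition;
    };

    struct VolumeGSIn  { float3 positionWS : POSITION; };
    struct VolumeGSOut { float4 positionCS : SV_Position; };

    // Extrude a quad from edge (a, b) away from the light; w = 0 sends the
    // far vertices to infinity under the projection.
    void EmitQuad(float3 a, float3 b, inout TriangleStream<VolumeGSOut> stream)
    {
        float4 corners[4] =
        {
            mul(float4(a, 1.0f), g_viewProj),
            mul(float4(a - g_lightPosition, 0.0f), g_viewProj),
            mul(float4(b, 1.0f), g_viewProj),
            mul(float4(b - g_lightPosition, 0.0f), g_viewProj)
        };
        VolumeGSOut v;
        [unroll]
        for (int i = 0; i < 4; ++i)
        {
            v.positionCS = corners[i];
            stream.Append(v);
        }
        stream.RestartStrip();
    }

    [maxvertexcount(12)]
    void ShadowVolumeGS(triangleadj VolumeGSIn input[6],
                        inout TriangleStream<VolumeGSOut> stream)
    {
        // Vertices 0, 2, 4 form the triangle; 1, 3, 5 are the vertices of
        // the adjacent triangles across the corresponding edges.
        float3 n = cross(input[2].positionWS - input[0].positionWS,
                         input[4].positionWS - input[0].positionWS);
        if (dot(n, g_lightPosition - input[0].positionWS) <= 0.0f)
            return; // only light-facing triangles can own silhouette edges

        [unroll]
        for (int e = 0; e < 6; e += 2)
        {
            float3 a = input[e].positionWS;
            float3 b = input[(e + 2) % 6].positionWS;
            float3 c = input[e + 1].positionWS; // adjacent vertex of edge (a, b)
            float3 adjN = cross(c - a, b - a);  // neighbor triangle normal
            // Silhouette edge: the neighbor faces away from the light.
            if (dot(adjN, g_lightPosition - a) <= 0.0f)
                EmitQuad(a, b, stream);
        }
    }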

Interaction:
Press "space" button to launch/stop torus animation.

References:
1. Tomas Akenine-Möller, Eric Haines, Nathaniel Hoffman. 2008. "Real-Time Rendering"
2. Elmar Eisemann, Michael Schwarz, Ulf Assarsson, Michael Wimmer. 2012. "Real-Time Shadows"
3. Jason Zink, Matt Pettineo, Jack Hoxley. 2011. "Practical Rendering and Computation with Direct3D 11"

Projection Shadows (Multiple Light Sources)

Requirements: Direct3D 11, Visual Studio 2010
Source Code: Projection Shadows (Multiple Light Sources)

This is an extension of the previous demo to the case of several point light sources.


Projection Shadows

Requirements: Direct3D 11, Visual Studio 2010
Source Code: Projection Shadows



A projection shadow can be a good option when an object casts a shadow onto a planar surface: the occluder is rendered a second time in a dark color, flattened onto the plane by a projection matrix built from the plane equation and the light position [1],[2].
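
The flattening matrix can be built directly from the plane equation and the light position; a sketch, following the construction in [1] (column-vector convention, so transpose it when multiplying row vectors with mul(v, m)):

    // Matrix projecting geometry onto the plane n.x + d = 0 as seen from a
    // point light at lightPos.
    float4x4 MakeShadowMatrix(float3 n, float d, float3 lightPos)
    {
        float4 l  = float4(lightPos, 1.0f);
        float4 p  = float4(n, d);    // plane coefficients
        float  nl = dot(p, l);       // n . lightPos + d
        float4x4 m;
        [unroll]
        for (int i = 0; i < 4; ++i)
        {
            [unroll]
            for (int j = 0; j < 4; ++j)
                m[i][j] = ((i == j) ? nl : 0.0f) - l[i] * p[j];
        }
        return m;
    }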

References:
1. Tomas Akenine-Möller, Eric Haines, Nathaniel Hoffman. 2008. "Real-Time Rendering" pp. 333-336
2. Elmar Eisemann, Michael Schwarz, Ulf Assarsson, Michael Wimmer. 2012. "Real-Time Shadows" pp. 21-26
3. Jason Zink, Matt Pettineo, Jack Hoxley. 2011. "Practical Rendering and Computation with Direct3D 11"

Blinn-Phong Lighting

Requirements: Direct3D 9, Visual Studio 2010
Source code: Blinn-Phong Lighting

This demo presents the Blinn-Phong shading model for a torus object. Essentially, it demonstrates the image quality for three cases:
1. Shading is done totally in vertex shader (per vertex shading)

2. Shading is done both in vertex and pixel shaders (per pixel shading)

3. Shading is done totally in pixel shader (true per pixel shading)
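
For reference, the third case might look like the sketch below, written in D3D11-style HLSL for consistency with the rest of this page (the demo itself targets Direct3D 9); the names are illustrative:

    cbuffer MaterialLight : register(b0)
    {
        float3 g_lightPosition;
        float3 g_eyePosition;
        float3 g_ambientColor;
        float3 g_diffuseColor;
        float3 g_specularColor;
        float  g_shininess;
    };

    // Case 3: the whole model is evaluated per pixel; the vertex shader only
    // passes the world space position and normal through.
    float4 BlinnPhongPS(float3 positionWS : POSITION,
                        float3 normalWS   : NORMAL) : SV_Target
    {
        float3 n = normalize(normalWS); // renormalize after interpolation
        float3 l = normalize(g_lightPosition - positionWS);
        float3 v = normalize(g_eyePosition - positionWS);
        float3 h = normalize(l + v);    // half vector
        float3 diffuse  = g_diffuseColor * saturate(dot(n, l));
        float3 specular = g_specularColor
                        * pow(saturate(dot(n, h)), g_shininess);
        return float4(g_ambientColor + diffuse + specular, 1.0f);
    }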


References:
1. Edward Angel, David Shreiner. 2011. "Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL" - 6th ed. pp. 257-298
2. Kris Gray. 2003. "DirectX 9. Programmable Graphics Pipeline"

August 7, 2012

Hello, hello!

My name is Mykola and I work as a game programmer at Ubisoft Entertainment (Ukraine) on the Assassin's Creed project for the PC platform.