Project 3: Incremental Instant Radiosity Implementation
Introduction


Instant radiosity is an important technique for real-time global illumination, computing an approximate solution of the rendering equation directly. Operating on the textured scene description, this simple and very efficient algorithm produces photorealistic images without any finite element kernel or solution discretization of the underlying integral equation. Its efficiency is still limited, however, because all point light sources must be regenerated every frame. Incremental instant radiosity was proposed to render single-bounce indirect illumination in real time on currently available graphics hardware. The method is based on the instant radiosity algorithm, where virtual point lights (VPLs) are generated by casting rays from the primary light source. Hardware shadow maps are then employed for determining the indirect illumination from the VPLs. Its main contribution is an algorithm for reusing the VPLs and incrementally maintaining their good distribution. As a result, only a few shadow maps need to be rendered per frame as long as the motion of the primary light source is reasonably smooth. This yields real-time frame rates even when hundreds of VPLs are used.


Simple Algorithm Overview

Instant radiosity process: For a full description of the algorithm, see the original paper [1]; here is a (very) brief overview. At runtime, a number of photons are selected from the light sources to be cast into the scene, and the mean reflectivity r of the scene is calculated. Initially there are N photons. For each photon, the scene is rendered with a point light source placed at the origin of the photon. Then rN of these photons are cast into the scene. When a photon hits a surface, it is attenuated by the diffuse component of that surface, and the scene is rendered again with the point light source moved to the hit point. r²N of the original photons continue on to a second bounce.
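The N, rN, r²N, ... schedule above determines how many rendering passes each bounce contributes. A tiny sketch of that bookkeeping (the numbers here are hypothetical; r is the mean reflectivity from above):

```python
def vpls_per_bounce(n_start, mean_reflectivity):
    """Number of surviving photon paths for bounce 0, 1, 2, ...
    following the N, rN, r^2 N, ... schedule, until the count hits zero."""
    counts = []
    bounce = 0
    while True:
        n = int(mean_reflectivity ** bounce * n_start)
        if n == 0:
            break
        counts.append(n)
        bounce += 1
    return counts

counts = vpls_per_bounce(100, 0.5)
print(counts)       # [100, 50, 25, 12, 6, 3, 1]
print(sum(counts))  # 197 rendering passes in total
```

Note how quickly the per-bounce counts fall off: with a mean reflectivity of 0.5, the second bounce already uses only a quarter of the original paths.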

 

The process is repeated until all of the paths are complete.


In essence, this algorithm generates an increasingly accurate approximation of the average radiance at every point in the scene with each rendering pass.

 

Incremental process


You must first validate each VPL, because its generating ray may have become occluded by the movement of the light sources and objects. All invalid VPLs must be removed, and the contributions of the remaining valid VPLs adjusted. Furthermore, you must add new VPLs until the desired number is reached. Finally, you render the scene with all existing VPLs.
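A minimal sketch of one frame of this maintenance loop (the VPL record, the is_visible test, and the intensity rule here are illustrative assumptions, not the paper's actual data structures):

```python
from dataclasses import dataclass

@dataclass
class VPL:
    position: tuple    # where the VPL sits in the scene
    intensity: float   # contribution, rescaled every frame

def maintain_vpls(vpls, is_visible, spawn_vpl, target_count):
    """One frame of incremental VPL maintenance:
    1. remove VPLs whose generating ray is now occluded,
    2. add new VPLs until the desired count is restored,
    3. adjust the survivors so the total power stays conserved."""
    vpls = [v for v in vpls if is_visible(v)]
    while len(vpls) < target_count:
        vpls.append(spawn_vpl())
    for v in vpls:
        v.intensity = 1.0 / len(vpls)  # equal share of the light's power
    return vpls

# toy usage: VPLs below y = 0 count as occluded this frame
vpls = [VPL((0.0, y, 0.0), 0.0) for y in (-1.0, 0.5, 2.0)]
vpls = maintain_vpls(vpls,
                     is_visible=lambda v: v.position[1] >= 0.0,
                     spawn_vpl=lambda: VPL((0.0, 1.0, 0.0), 0.0),
                     target_count=4)
print(len(vpls))  # 4
```

In a real implementation, only the newly added VPLs need fresh shadow maps, which is exactly why the method stays real-time.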

Pros and Cons

The advantages and disadvantages of incremental instant radiosity are as follows:

Significant advantages:

  • Can use OpenGL hardware to decrease rendering time
  • Computed solution can be displayed directly
  • Low memory requirements, since it works in image space rather than creating matrix elements
  • Algorithm can be extended to allow specular surfaces
  • Radiance contribution from textures is directly computed

Disadvantages:

  • View dependent (although Keller does present a method of generating walk-throughs in the paper)
  • This method is not terribly accurate: it generates nice pictures, but should not be used for predictive results.
  • Final output quality is dependent on hardware capabilities. For instance, the accumulation buffer needs to be fairly deep to allow large numbers of images to be composited.
  • Doesn't work as well on scenes lit primarily by indirect lighting. Since it uses a quasi-random walk to distribute photons, several hundred frames may be required to get photons to arrive where they are needed.

 

The Basic Task

Radiosity is a global illumination algorithm that handles diffuse interreflections between surfaces. 

Instant Radiosity uses a Quasi-Monte Carlo approach to solve this integral and creates point light sources at each bounce for rays cast from the light source. If the light sources and resulting bounces are sampled adequately, this yields a good approximation of the global lighting in the scene. The core algorithm (with indicated modifications) is as follows:

[Figure: core algorithm pseudocode]
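A rough Python sketch of this core loop (trace_ray and random_direction are placeholder interfaces standing in for the renderer's own routines, not part of any real API):

```python
import random

def random_direction():
    """Uniform direction on the unit sphere (placeholder light sampler)."""
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    norm = (x * x + y * y + z * z) ** 0.5
    return (x / norm, y / norm, z / norm)

def generate_vpls(light_pos, light_power, n_paths, trace_ray, mean_reflectivity):
    """Cast n_paths rays from the primary light and deposit a virtual point
    light (VPL) at every diffuse bounce.  trace_ray stands in for the
    renderer's intersection routine: it returns (hit_point,
    diffuse_reflectance, next_direction), or None when the ray leaves
    the scene."""
    vpls = []
    for _ in range(n_paths):
        origin, direction = light_pos, random_direction()
        power = light_power / n_paths
        while True:
            hit = trace_ray(origin, direction)
            if hit is None:
                break
            point, reflectance, next_dir = hit
            power *= reflectance           # attenuate by the diffuse component
            vpls.append((point, power))    # this bounce becomes a point light
            if random.random() >= mean_reflectivity:
                break                      # terminate the path, roulette-style
            origin, direction = point, next_dir
    return vpls

# toy check: one deterministic two-bounce path with reflectance 0.5
hits = iter([((0, 0, 0), 0.5, (0, 0, 1)), ((1, 0, 0), 0.5, (0, 0, 1))])
vpls = generate_vpls((0, 5, 0), 1.0, 1, lambda o, d: next(hits, None), 1.0)
print([w for _, w in vpls])  # [0.5, 0.25]
```

Each entry in the returned list corresponds to one shadow-mapped point light in the rendering passes described above.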


Required Functionalities
  1. Use P1's parser to parse in the scene, material, light, and other information. 
  2. Fast local rendering with OpenGL. 
  3. Shoot out light rays from light sources to generate VPLs. (5%)
  4. Visualize these VPLs with OpenGL. (5%)
  5. Generate a shadow map for each VPL. (10%)
  6. Render the scene by rasterizing all scene triangles, shading with all VPLs using their shadow maps. (10%)
  7. Dynamically update the characteristics of VPLs with incremental instant radiosity. (25%)
    1. Remove invalid VPLs.
    2. Adjust valid VPLs.
    3. Add new VPLs.
  8. Acceleration with KD-tree or BVH (15%)
What to Hand in

All your hand-in files must be put in a directory named with your student ID; the following is the list of hand-in files under that directory. 

  • Program and source: As usual, you must hand in everything needed to build and run your program, including all texture files and other resources.
  • Gallery: Please include a few JPG pictures (at least three) of the rendering results. Please name the pictures ID-X.jpg (where X is a number).
  • Read-me.txt
    • Instructions on how to use your program (in case we want to use it when you're not around)
    • Descriptions of what your program does to meet all of the minimum requirements.
  • Technical.txt:
    • The report could contain a description of this project, what you have learned from this project, description of the algorithm you implemented, implementation details, results (either good or bad), and what extensions you have implemented. 
Advanced Technologies
  • Sampling on area lights

Sampling is used to approximate an integral of a function as the average of the function evaluated at a set of points. Mathematically:

    ∫₀¹ f(x) dx ≈ (1/N) Σᵢ₌₁ᴺ f(Xᵢ)

If Xi = i/N, the sampling is rectangular. If Xi is pseudorandom or random, we call it Monte Carlo sampling. If the sequence Xi has low discrepancy, we term it Quasi-Monte Carlo sampling. Loosely speaking, low discrepancy implies that a graphical representation of the sequence does not have regions of unequal sample density. For instance, consider the images below: the image to the left has low discrepancy.
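As a quick illustration of the estimator above (a toy integrand chosen for this sketch, not taken from the project), here are the rectangular and Monte Carlo variants estimating ∫₀¹ x² dx = 1/3:

```python
import random

def estimate(f, xs):
    """Average of f over the sample points -- the estimator described above."""
    return sum(f(x) for x in xs) / len(xs)

f = lambda x: x * x  # exact integral over [0, 1] is 1/3
N = 10_000

rect = estimate(f, [i / N for i in range(N)])          # rectangular: X_i = i/N
random.seed(0)
mc = estimate(f, [random.random() for _ in range(N)])  # Monte Carlo: X_i random

print(round(rect, 4), round(mc, 4))  # both land close to 0.3333
```

A Quasi-Monte Carlo variant would simply replace the random X_i with a low-discrepancy sequence such as Halton, discussed next.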

    1. Halton sampling is a deterministic Quasi-Monte Carlo sampling technique. In 2D, it uses pairs of numbers generated from Halton sequences. These sequences are based on a prime number and can be constructed as follows: pick a prime p and the number of desired samples N, then divide the interval [0,1] in this fashion: 1/p, 1/p^2, 2/p^2, ..., p^2/p^2, 1/p^3, ..., until N unique fractions are created. To generate a sample in 2D, pick primes p and q, generate the corresponding sequences, and pair the numbers. It is recommended that the first 20 samples be discarded for higher primes due to a high correlation between those pairs. 
    2. Poisson Disk Sampling is a form of Poisson sampling where samples are guaranteed to be separated by a specified distance (radius). There are numerous techniques to generate Poisson Disk Samples efficiently, such as [8] and [9]. However, for a low number of samples, we used the Dart Throwing technique and cached the results. To do this, we generate points and discard those that do not meet the radius criterion; this process is continued until N points are reached. In our implementation, we create different sets of samples such that the same set is used for a specific bounce. We believe that this is similar to the approach taken in the original paper, where each reflection uses samples based on a set of primes (2j+2, 2j+3), where j is the reflection count.
    3. Picking the right sampling is the key to getting impressive results using IR. Sampling is used in two areas in this project: to pick points on the light source, and to choose directions in which to shoot rays from the selected point. Each of these requires a mapping from samples on a unit square to samples on triangles or on hemispheres. Fortunately, these are described in Graphics Gems III (the relevant page is available on Google Books). These samples need to be weighted based on the area of the triangle. Further, there are other technical considerations that affect scene-independent implementations. For instance, increasing the number of samples per light source will result in a brighter scene unless the intensity of the original samples is weighted accordingly. Similarly, in open environments, the intensity of each light needs to be attenuated by the total number of created lights (and not the estimated hits).
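Items 1 and 3 above can be combined in a short sketch: a radical inverse generates low-discrepancy Halton samples on the unit square, which are then mapped uniformly onto a triangle with the standard square-root parameterization (the variable names here are illustrative):

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of i in the given base: the digit
    expansion of i is mirrored about the decimal point."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton_2d(i):
    """i-th point of the 2D Halton sequence (bases 2 and 3)."""
    return radical_inverse(i, 2), radical_inverse(i, 3)

def sample_triangle(a, b, c, u, v):
    """Map a unit-square sample (u, v) uniformly onto triangle abc
    via the square-root parameterization."""
    su = u ** 0.5
    w0, w1, w2 = 1.0 - su, su * (1.0 - v), su * v
    return tuple(w0 * pa + w1 * pb + w2 * pc for pa, pb, pc in zip(a, b, c))

# first few light-source sample points on a triangle
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
points = [sample_triangle(*tri, *halton_2d(i)) for i in range(1, 5)]
```

For area-weighted sampling across a light source made of several triangles, each triangle would first be chosen with probability proportional to its area before mapping the sample onto it.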

  • Lightcuts

Lightcuts is a scalable framework for computing realistic illumination. It handles arbitrary geometry, non-diffuse materials, and illumination from a wide variety of sources including point lights, area lights, HDR environment maps, sun/sky models, and indirect illumination. At its core is a new algorithm for accurately approximating illumination from many point lights with a strongly sublinear cost. The authors show how a group of lights can be cheaply approximated while bounding the maximum approximation error. A binary light tree and a perceptual metric are then used to adaptively partition the lights into groups to control the error vs. cost tradeoff.

The paper also introduces reconstruction cuts, which exploit spatial coherence to accelerate the generation of anti-aliased images with complex illumination. Results are demonstrated for five complex scenes and show that lightcuts can accurately approximate hundreds of thousands of point lights using only a few hundred shadow rays; reconstruction cuts can reduce the number of shadow rays to tens.

  • Matrix row-column sampling

Rendering complex scenes with indirect illumination, high dynamic range environment lighting, and many direct light sources remains a challenging problem. Prior work has shown that all these effects can be approximated by many point lights. This approach provides a scalable solution to the many-light problem suitable for a GPU implementation. The problem is viewed as a large matrix of sample-light interactions; the ideal final image is the sum of the matrix columns. The algorithm approximates this sum by sampling entire rows and columns of the matrix on the GPU using shadow mapping. The key observation is that the inherent structure of the transfer matrix can be revealed by sampling just a small number of rows and columns. A prototype implementation can compute the light transfer within a few seconds for scenes with indirect and environment illumination, area lights, complex geometry, and arbitrary shaders. This approach can be very useful for rapid previewing in applications like cinematic and architectural lighting design. 
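The matrix view can be illustrated at a toy scale (the numbers and the uniform column sampler below are hypothetical; the real method samples rows and columns with shadow maps on the GPU):

```python
import random

# Toy many-light "transfer matrix": entry [s][l] is light l's contribution
# to surface sample s.  The final image is the sum over columns (lights).
matrix = [[(s + 1) * (l + 1) * 0.01 for l in range(100)] for s in range(4)]

def exact_image(m):
    """Reference result: sum every column for every surface sample."""
    return [sum(row) for row in m]

def sampled_image(m, n_samples, rng):
    """Unbiased estimate of the column sum using n_samples uniformly
    chosen columns, each weighted by (num_columns / n_samples)."""
    n_lights = len(m[0])
    picks = [rng.randrange(n_lights) for _ in range(n_samples)]
    scale = n_lights / n_samples
    return [scale * sum(row[l] for l in picks) for row in m]

rng = random.Random(0)
exact = exact_image(matrix)
est = sampled_image(matrix, 30, rng)
print(exact)  # [50.5, 101.0, 151.5, 202.0]
print(est)    # a randomized estimate of the sums above
```

The real algorithm improves on this uniform sampler by using the sampled rows to discover the matrix structure and pick representative columns, which is what makes a few hundred shadow maps sufficient.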

References

[1] Keller, Alexander, "Instant Radiosity", Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, July, 1997

[2] Samuli Laine, Hannu Saransaari, Janne Kontkanen, Jaakko Lehtinen, and Timo Aila, "Incremental Instant Radiosity for Real-Time Indirect Illumination", Proceedings of EGSR 2007, June 2007