In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television visual effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).
In this project, you will build a program called Ray that will generate ray-traced images of complex scenes. The ray tracer should trace rays recursively using the Whitted illumination model.
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be recast into the scene.
It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
 Advantages over other rendering methods:
Ray tracing's popularity stems from its basis in a realistic simulation of lighting over other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. The computational independence of each ray makes ray tracing amenable to parallelization.
 Disadvantages:
A serious disadvantage of ray tracing is performance (though in theory it can be faster than traditional scanline rendering, depending on scene complexity versus the number of pixels on screen). Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial antialiasing and improve image quality where needed.
Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented, since the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required.
The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays, but include additional techniques (photon mapping, path tracing), give far more accurate simulation of real-world lighting.
for every pixel {
    cast a ray from the eye
    for every object in the scene {
        find intersections with the ray; keep the closest
    }
    compute color at the intersection point
}
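As a concrete illustration of the loop above, here is a minimal Python sketch that intersects a ray with a list of spheres and shades the nearest hit with a single diffuse term. All names, the sphere-only scene, and the one-term shading are illustrative assumptions, not the skeleton code's API:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive t where origin + t*direction hits the sphere, or
    None.  Assumes direction is normalized (so a == 1 in the quadratic)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer root only
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, light_dir):
    """Find the closest sphere along the ray and shade it with a single
    Lambertian (diffuse) term; 0.0 means the ray hit the background."""
    hit_t, hit_center = None, None
    for center, radius in spheres:
        t = ray_sphere(origin, direction, center, radius)
        if t is not None and (hit_t is None or t < hit_t):
            hit_t, hit_center = t, center
    if hit_t is None:
        return 0.0
    point = [o + hit_t * d for o, d in zip(origin, direction)]
    normal = [p - c for p, c in zip(point, hit_center)]
    n_len = math.sqrt(sum(n * n for n in normal))
    normal = [n / n_len for n in normal]
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

For example, a unit sphere centered at (0, 0, -5), viewed along -z with the light behind the camera, shades to full intensity at the center of its silhouette.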
We'll describe these requirements in more detail below:
 Parse the scene, material, light, and other information. (5%)
 Fast local rendering with OpenGL. (5%)
 Generate the tracing rays from the camera. (5%)
 Use the intersection mechanism in Engine for ray-object intersection computation. (5%)
 Implement the Whitted illumination model, which includes Phong shading (emissive, ambient, diffuse, and specular terms) as well as reflection and refraction terms. You only need to handle directional and point light sources, i.e. no area lights, but you should be able to handle multiple lights. (20%)
 Implement Phong interpolation of normals on triangle meshes. (10%)
 Implement antialiasing. Regular supersampling is acceptable; more advanced antialiasing will be considered as an extension. (10%)
 Implement data structures that speed up the intersection computations in large scenes. There will be a contest at the end of the project to determine the team with the fastest ray tracer. (10%)

Notes on Whitted's illumination model
The first three terms in Whitted's model will require you to trace rays towards each light, and the last two will require you to recursively trace reflected and refracted rays. (Notice that the number of reflected and refracted rays that will be calculated is limited by the "depth" setting in the ray tracer. This means that to see reflections and refraction, you must set the depth to be greater than zero!) When tracing rays toward lights, you should look for intersections with objects, thereby rendering shadows. If you intersect a semi-transparent object, you should attenuate the light, thereby rendering partial (color-filtered) shadows, but you may ignore refraction of the light source.
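One common way to realize the color-filtered shadows just described is to march the shadow ray toward the light and multiply together the transmissive constants (kt) of every semi-transparent object it crosses. A minimal sketch, where the list of kt values stands in for whatever your intersection code actually returns:

```python
def shadow_attenuation(occluder_kts):
    """Given the kt (transmissive constant in [0, 1]) of every object the
    shadow ray intersects between the surface point and the light, return
    the attenuation factor applied to that light's contribution.  An opaque
    occluder (kt == 0) blacks out the light entirely."""
    atten = 1.0
    for kt in occluder_kts:
        atten *= kt
        if atten == 0.0:
            return 0.0   # fully shadowed; no need to test further occluders
    return atten
```

With no occluders the light is unattenuated; two objects of kt = 0.5 pass a quarter of the light.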
The skeleton code doesn't implement Phong interpolation of normals. You need to add code for this (only for meshes with per-vertex normals).
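Phong interpolation itself is a small amount of code: weight the three vertex normals by the barycentric coordinates of the hit point, then renormalize. A sketch with illustrative names:

```python
import math

def interpolate_normal(n0, n1, n2, alpha, beta, gamma):
    """Interpolate per-vertex normals n0, n1, n2 with barycentric weights
    (alpha + beta + gamma == 1) and renormalize, since a blend of unit
    vectors is generally shorter than unit length."""
    n = [alpha * a + beta * b + gamma * c for a, b, c in zip(n0, n1, n2)]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]
```

At a vertex (one weight equal to 1) the result is that vertex's normal; midway along an edge it is the normalized average of the two endpoint normals.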
Here is a document that lists equations that will come in handy when writing your shading and ray tracing algorithms.

Antialiasing
Once you've implemented the shading model and can generate images, you will notice that the images you generated are filled with "jaggies". You should implement an antialiasing technique to smooth these rough edges. In particular, you are required to perform supersampling and averaging down. You should provide a slider which allows the user to control the number of samples per pixel (1, 4, 9 or 16 samples). You need only implement a box filter for the averaging down step. More sophisticated antialiasing methods are left as bells and whistles below. 
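As a sketch of regular supersampling with a box filter, the fragment below takes n*n samples on a regular grid inside each pixel and averages them. The `shade(u, v)` callback stands in for casting a ray through image-plane coordinates (u, v) and is an assumption, not part of the skeleton:

```python
def render_supersampled(shade, width, height, n):
    """Render a width x height image with n*n regular supersampling per
    pixel, averaged down with a box filter.  `shade(u, v)` returns the
    color at image-plane coordinates u, v in [0, 1)."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            total = 0.0
            for sy in range(n):
                for sx in range(n):
                    # sample at the center of each sub-pixel cell
                    u = (x + (sx + 0.5) / n) / width
                    v = (y + (sy + 0.5) / n) / height
                    total += shade(u, v)
            row.append(total / (n * n))  # box filter: plain average
        image.append(row)
    return image
```

With n = 1 this degenerates to one ray through each pixel center, which is the unfiltered case your slider should also support.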
Accelerated ray-surface intersection
The goal of this portion of the assignment is to speed up the ray-surface intersection module in your ray tracer. In particular, we want you to improve the running time of the program when ray tracing complex scenes containing large numbers of objects (usually triangles). There are two basic approaches to do this:
 1. Specialize and optimize the ray-object intersection test to run as fast as possible.
 2. Add data structures that speed up the intersection query when there are many objects.
Most of your effort should be spent on approach 2, i.e. reducing the number of ray-object intersection tests. You are free to experiment with any of the acceleration schemes described in Chapter 6, ''A Survey of Ray Tracing Acceleration Techniques,'' of Glassner's book. Of course, you are also free to invent new acceleration methods.
Make sure that you design your acceleration module so that it is able to handle the current set of geometric primitives; that is, triangles, spheres, squares, boxes, and cones.
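Whatever acceleration structure you choose (grid, BVH, BSP tree), it will almost certainly rest on a fast ray vs. axis-aligned bounding box test. Here is a sketch of the standard slab method; it takes precomputed reciprocal direction components and, to stay short, assumes none of them is zero (a robust version would handle axis-parallel rays explicitly):

```python
def ray_box(origin, inv_dir, box_min, box_max):
    """Slab test: True if the ray with origin `origin` and reciprocal
    direction components `inv_dir` (1/dx, 1/dy, 1/dz) hits the axis-aligned
    box [box_min, box_max].  Used to skip whole groups of primitives."""
    t_near, t_far = -float("inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0          # order the slab entry/exit distances
        t_near = max(t_near, t0)     # latest entry across all slabs
        t_far = min(t_far, t1)       # earliest exit across all slabs
        if t_near > t_far:
            return False             # ray exits one slab before entering another
    return t_far >= 0.0              # reject boxes entirely behind the ray
```

Precomputing `inv_dir` once per ray avoids three divisions per box tested, which matters when a single ray visits many tree nodes.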
The sample scenes include several simple scenes and three complex test scenes: trimesh1, trimesh2, and trimesh3. You will notice that trimesh1 has per-vertex normals and materials, and trimesh2 has per-vertex materials but not normals. Per-vertex normals and materials imply interpolation of these quantities at the current ray-triangle intersection point (using barycentric coordinates).
All your hand-in files must be placed in a directory named with your student ID; the following is the list of hand-in files under that directory.
 Program and source: As usual, you must hand in everything needed to build and run your program, including all texture files and other resources.
 Gallery: Please include a few JPG pictures of your rendering results, at least three. Please name the pictures IDX.jpg (where X is a number).
 Readme.txt:
 Instructions on how to use your program (in case we want to use it when you're not around)
 Descriptions of what your program does to meet all of the minimum requirements.
 Technical.txt:
 The report could contain a description of this project, what you have learned from this project, description of the algorithm you implemented, implementation details, results (either good or bad), and what extensions you have implemented.
Here are some examples of effects you can get with ray tracing. Note that none of these were created from past students' ray tracers.
 Implement an adaptive termination criterion for tracing rays, based on ray contribution. Control the adaptation threshold with a slider.
 Implement stochastic (jittered) supersampling. See Glassner, Chapter 5, Sections 4.1-4.2 and the first 4 pages of Section 7.
 Modify shadow attenuation to use Beer's law, so that thicker objects cast darker shadows than thinner ones with the same transparency constant. (See Shirley p. 214.)
 Include a Fresnel term so that the amount of reflected and refracted light at a transparent surface depends on the angle of incidence and index of refraction. (See Shirley p. 214.)
 Add a menu option that lets you specify a background image to replace the environment's ambient color during the rendering. That is, any ray that goes off into infinity behind the scene should return a color from the loaded image, instead of just black. The background should appear as the backplane of the rendered image, with suitable reflections and refractions of it.
 Deal with overlapping objects intelligently. In class, we discussed how to handle refraction for nonoverlapping objects in air. This approach breaks down when objects intersect or are wholly contained inside other objects. Add support to the refraction code for detecting this and handling it in a more realistic fashion. Note, however, that in the real world, objects can't coexist in the same place at the same time. You will have to make assumptions as to how to choose the index of refraction in the overlapping space. Make those assumptions clear when demonstrating the results.
 Implement spot lights.
 Implement antialiasing by adaptive supersampling, as described in Glassner, Chapter 1, Section 4.5 and Figure 19, or in Foley, et al., 15.10.4. For full credit, you must show some sort of visualization of the sampling pattern that results. For example, you could create another image where each pixel is given an intensity proportional to the number of rays used to calculate the color of the corresponding pixel in the ray traced image. Implementing this bell/whistle is a big win: nice antialiasing at low cost.
 Add some new types of geometry to the ray tracer. Consider implementing tori or general quadrics. Many other objects are possible here.
 Implement more versatile lighting controls, such as the Warn model described in Foley 16.1.5. This allows you to do things like control the shape of the projected light.
 Implement stochastic or distributed ray tracing to produce one or more of the following effects: depth of field, soft shadows, motion blur, glossy reflection (See Glassner, Chapter 5, or Foley, et al., 16.12.4).
 Add texture mapping support to the program. To get full credit for this, you must add uv coordinate mapping to all the builtin primitives (sphere, box, cylinder, cone) except trimeshes. The square object already has coordinate mapping implemented for your reference. The most basic kind of texture mapping is to apply the map to the diffuse color of a surface. But many other parameters can be mapped. Reflected color can be mapped to create the sense of a surrounding environment. Transparency can be mapped to create holes in objects. Additional (variable) extra credit will be given for such additional mappings. The basis for this bell is built into the skeleton, and the parser already handles the types of mapping mentioned above. Additional credit will be awarded for quality implementation of texture mapping on general trimeshes.
 Implement bump mapping (Watt 8.4; Foley, et al. 16.3.3). Check this out!
 Implement solid textures or some other form of procedural texture mapping, as described in Foley, et al., 20.1.2 and 20.8.3. Solid textures are a way to easily generate a semirandom texture like wood grain or marble.
 Extend the raytracer to create Single Image Random Dot Stereograms (SIRDS). Click here to read a paper on how to make them. Also check out this page of examples. Or, create 3D images like this one, for viewing with red-blue glasses.
 Implement 3D fractals and extend the .ray file format to provide support for these objects. Note that you are not allowed to "fake" this by just drawing a plain old 2D fractal image, such as the usual Mandelbrot Set. Similarly, you are not allowed to cheat by making a .ray file that arranges objects in a fractal pattern, like the sier.ray test file. You must raytrace an actual 3D fractal, and your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing different fractal algorithms or modifying the base pattern used to produce the fractal. Here are two really good examples of raytraced fractals that were produced by students during a previous quarter: Example 1, Example 2. And here are a couple more interesting fractal objects: Example 3, Example 4
 Implement 4D quaternion fractals and extend the .ray file format to provide support for these objects. These types of fractals are generated by using a generalization of complex numbers called quaternions. What makes the fractal really interesting is that it is actually a 4D object. This is a problem because we can only perceive three spatial dimensions, not four. In order to render a 3D image on the computer screen, one must "slice" the 4D object with a three-dimensional hyperplane. Then the points plotted on the screen are all the points that are in the intersection of the hyperplane and the fractal. Your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing different generating equations, changing the slicing plane, or modifying the surface attributes of the fractal. Here are a few examples, which were created using the POV-Ray raytracer (yes, POV-Ray has quaternion fractals built in!): Example 1, Example 2, Example 3, Example 4. And, this is an excellent example from a previous quarter. To get started, visit this web page to brush up on your quaternion math. Then go to this site to learn about the theory behind these fractals. Then, you can take a look at this page for a discussion of how a raytracer can perform intersection calculations.
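Several of the extensions above come down to short formulas. For example, Beer's-law shadow attenuation (mentioned in the list) scales transmitted light by e^(-a*d) per color channel, where a is an absorption coefficient and d the distance the ray travels inside the object, so thicker objects cast darker shadows. A sketch, with the per-channel coefficient list as an illustrative interface:

```python
import math

def beer_attenuation(absorb, distance):
    """Beer's law: light passing through `distance` units of a material
    with per-channel absorption coefficients `absorb` is scaled by
    e^(-a*d) in each channel, so attenuation grows with thickness."""
    return [math.exp(-a * distance) for a in absorb]
```

A coefficient of zero leaves the light untouched; doubling the path length through the same material squares the transmitted fraction.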

Implement a more realistic shading model. Credit will vary depending on the sophistication of the model. A simple model factors in the Fresnel term to compute the amount of light reflected and transmitted at a perfect dielectric (e.g., glass). A more complex model incorporates the notion of a microfacet distribution to broaden the specular highlight. Accounting for the color dependence in the Fresnel term permits a more metallic appearance. Even better, include anisotropic reflections for a plane with parallel grains or a sphere with grains that follow the lines of latitude or longitude. Sources: Shirley, Chapter 24, Watt, Chapter 7, Foley et al, Section 16.7; Glassner, Chapter 4, Section 4; Ward's SIGGRAPH '92 paper; Schlick's Eurographics Rendering Workshop '93 paper. This all sounds kind of complex, and the physics behind it is. But the coding doesn't have to be. It can be worthwhile to look up one of these alternate models, since they do a much better job at surface shading. Be sure to demo the results in a way that makes the value added clear. Theoretically, you could also invent new shading models. For instance, you could implement a less realistic model! Could you implement a shading model that produces something that looks like cel animation? Variable extra credit will be given for these "alternate" shading models. Links to ideas: Comic Book Rendering. Note that you must still implement the Phong model.
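As a starting point for the Fresnel term mentioned above, Schlick's approximation is a popular, cheap stand-in for the full Fresnel equations. A sketch, assuming unpolarized light crossing from index of refraction n1 to n2:

```python
def schlick_fresnel(cos_theta, n1, n2):
    """Schlick's approximation to the Fresnel reflectance.  `cos_theta`
    is the cosine of the angle of incidence; r0 is the reflectance at
    normal incidence, and reflectance rises toward 1 at grazing angles."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

For an air-glass interface (n1 = 1.0, n2 = 1.5) this gives about 4% reflectance head-on and 100% at grazing incidence, which is the angle dependence the bullet item asks for.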

Implement CSG, constructive solid geometry. This extension allows you to create very interesting models. See page 108 of Glassner for some implementation suggestions. An excellent example of CSG was built by a grad student here in the grad graphics course.

Add a particle systems simulation and renderer (Foley 20.5, Watt 17.7, or see instructor for more pointers).

 Implement caustics by tracing rays from the light source and depositing energy in texture maps (a.k.a. illumination maps, in this case). Caustics are variations in light intensity caused by refractive focusing: everything from simple magnifying-glass points to the shifting patterns on the bottom of a swimming pool. A paper discussing some methods. 2 bells each for refractive and reflective caustics. (Note: caustics can be modeled without illumination maps by doing "photon mapping", a monster bell described below.) Here is a really good example of caustics that were produced by two students during a previous quarter: Example
There are innumerable ways to extend a ray tracer. Think about all the visual phenomena in the real world. The look and shape of cloth. The texture of hair. The look of frost on a window. Dappled sunlight seen through the leaves of a tree. Fire. Rain. The look of things underwater. Prisms. Do you have an idea of how to simulate this phenomenon? Better yet, how can you fake it but get something that looks just as good? You are encouraged to dream up other features you'd like to add to the base ray tracer. Obviously, any such extensions will receive variable extra credit depending on merit (that is, coolness!). Feel free to discuss ideas with the course staff before (and while) proceeding!

Subsurface Scattering
The trace program assigns colors to pixels by simulating a ray of light that travels, hits a surface, and then leaves the surface at the same position. This is good when it comes to modeling a material that is metallic or mirror-like, but fails for translucent materials, or materials where light is scattered beneath the surface (such as skin, milk, plants...). Check this paper out to learn more.

Metropolis Light Transport
Not all rays are created equal. Some light rays contribute more to the image than others, depending on what they reflect off of or pass through on the route to the eye. Ideally, we'd like to trace the rays that have the largest effect on the image, and ignore the others. The problem is: how do you know which rays contribute most? Metropolis light transport solves this problem by randomly searching for "good" rays. Once those rays are found, they are mutated to produce others that are similar in the hope that they will also be good. The approach uses statistical sampling techniques to make this work. Here's some information on it, and a neat picture. 
Photon Mapping
Photon mapping is a powerful variation of ray tracing that adds speed, accuracy and versatility. It's a two-pass method: in the first pass, photon maps are created by emitting packets of energy (photons) from the light sources and storing these as they hit surfaces within the scene. The scene is then rendered using a distribution ray tracing algorithm optimized by using the information in the photon maps. It produces some amazing pictures. Here's some information on it.
 General
 An Improved Illumination Model for Shaded Display, T. Whitted, CACM, 1980, pp 343-349
 An Introduction to Ray Tracing, Andrew S. Glassner. (Chap. 6 for acceleration)
 Space Subdivision
 Ray Tracing with the BSP Tree, K. Sung & P. Shirley. Graphics Gems III.
 ARTS: Accelerated Ray-Tracing System, A. Fujimoto et al. CG&A April 1986, pp 16-25.
 A Fast Voxel Traversal Algorithm for Ray Tracing, J. Amanatides & A. Woo. Eurographics '87, pp 3-9.
 Faster Ray Tracing Using Adaptive Grids, K. Klimaszewski & T. Sederberg. CG&A Jan. 1997, pp 42-51. (It is claimed to be the fastest algorithm so far.)
 Hierarchical Bounding Volume
 Automatic Creation of Object Hierarchies for Ray Tracing, J. Goldsmith & J. Salmon. CG&A May 1987, pp 14-20.
 Efficiency Issues for Ray Tracing, B. Smits. Journal of Graphics Tools, Vol. 3, No. 2, pp. 1-14, 1998.
 Ray Tracing News, Vol. 10, No. 3.
 Tips
 Fast Ray-Box Intersection, A. Woo. Graphics Gems.
 Improved Ray Tagging for Voxel-Based Ray Tracing, D. Kirk & J. Arvo. Graphics Gems II.
 Rectangular Bounding Volumes for Popular Primitives, B. Trumbore. Graphics Gems III.
 A Linear-Time Simple Bounding Volume Algorithm, X. Wu. Graphics Gems III.
 A Fast Alternative to Phong's Specular Model, Christophe Schlick. Graphics Gems IV.
 Voxel Traversal along a 3D Line, D. Cohen. Graphics Gems IV.
 Faster Refraction Formula, and Transmission Color Filtering, Ray Tracing News, Vol. 10, No. 1.