Eric Levicky

Photon Mapping – Final Draft

Pixar uses global illumination to render its animated movies. On Pixar’s website they list the steps of making a movie. The seventh step is where computers enter the equation to set up the 3D objects and the scene. Step thirteen is rendering. “Pixar’s Renderfarm is a huge computer system that interprets the data and incorporates motion blur. Each frame represents 1/24 of a second of screen time and takes about six hours to render, though some frames have taken as many as ninety hours” (Pixar). With the rendering phase taking so long, it is imperative that it be as efficient and realistic as possible.

Two very popular and powerful methods of global illumination are Ray Tracing and Photon Mapping. Ray Tracing is a technique that shoots rays out from a single viewpoint, the camera, and calculates what color each ray should contribute based on which elements it hits and how it bounces off them (Benander). Photon Mapping takes the opposite approach: photons are shot out from the light source and then continue to bounce around the scene. Ray Tracing works well when shadows and other lighting effects are not prevalent; however, in most situations shadows are unavoidable. Starting at the light source allows for more realistic lighting effects with less manual manipulation of the lighting.

Introduction to Photon Mapping

Photon Mapping was developed by Henrik Jensen as an efficient alternative to ray tracing techniques (Walters). The idea behind Photon Mapping is to send photons out from the light source into the scene; the photons that reach the camera determine what is displayed. This method separates the illumination calculations from the geometric calculations, which allows the elaborate lighting equations to be calculated and stored separately. The benefit is that a graphical representation of the scene can be used to determine which illumination calculations are necessary before executing them all; the result is less time to render a scene (Walters). A scene with few light sources and non-reflective materials renders quickly. However, scenes with many light sources, reflective materials, and complex polygons take significantly more time to render.

Benefits

*Two-Pass Method
Photon Mapping uses a two-pass method. The first pass of this two-pass method creates the photon map (Walters). The photon map records each photon's interactions with elements in the scene (Purcell). The second pass is the rendering pass; this pass estimates diffuse indirect illumination (Purcell). The two-pass method shortens rendering time by calculating the interactions between vectors and elements in the first pass and then rendering the image in the second.
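As a rough sketch of this two-pass structure (an illustrative simplification with made-up geometry and class names such as TwoPassSketch, not Jensen's actual data structures), the first pass can store where photons land while the second estimates brightness at a point from the power of nearby stored hits:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative two-pass sketch (not Jensen's real data structures):
// pass 1 stores photon hit points on a floor plane, pass 2 estimates
// brightness at a point from the power of nearby stored hits.
public class TwoPassSketch {
    // One stored photon hit: position on the y = 0 floor plane and its power.
    public static class Hit {
        public final double x, z, power;
        public Hit(double x, double z, double power) { this.x = x; this.z = z; this.power = power; }
    }

    // Pass 1: emit photons from a point light at height 1 in random
    // downward directions and record where each lands on the floor.
    public static List<Hit> buildPhotonMap(int photonCount, long seed) {
        Random rng = new Random(seed);
        List<Hit> map = new ArrayList<>();
        double powerPerPhoton = 1.0 / photonCount; // the light's power is shared
        for (int i = 0; i < photonCount; i++) {
            double dx = rng.nextDouble() * 2 - 1;
            double dy = -(rng.nextDouble() * 0.5 + 0.5); // always pointing down
            double dz = rng.nextDouble() * 2 - 1;
            double t = -1.0 / dy; // solve 1 + t*dy = 0 for the floor hit
            map.add(new Hit(dx * t, dz * t, powerPerPhoton));
        }
        return map;
    }

    // Pass 2: estimate brightness at (x, z) as the total power of stored
    // photons within a search radius, divided by the search-disc area.
    public static double estimateRadiance(List<Hit> map, double x, double z, double radius) {
        double sum = 0;
        for (Hit h : map) {
            double ddx = h.x - x, ddz = h.z - z;
            if (ddx * ddx + ddz * ddz <= radius * radius) sum += h.power;
        }
        return sum / (Math.PI * radius * radius);
    }
}
```

Directly under the light the stored photon density, and therefore the estimate, is highest; far from the light it falls toward zero.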

*Lighting Effects
Ray tracers do not usually calculate ambient light or the lighting effects from refracted rays that emanate from the light source, otherwise known as caustics. Caustics are the concentration of light by reflective and refractive objects. "Caustics are formed by light that is reflected or transmitted by a number of specular surfaces before interacting with a diffuse surface. Examples of caustics are the light patterns on the bottom of a swimming pool and light focused onto a table through a glass of cognac" (Jensen). Ray tracers must fake ambient light; photon mapping allows ambient light and caustics to be calculated rather than faked. Below is an example image created with "Yafaray", an open source rendering program that produces very realistic images mainly using ray tracing techniques and employs photon mapping only in special cases (Estévez).

gabichloft.jpg

As you can see, this image is very realistic, and the renderer did an excellent job with the ambient light. A download of this open source program is available at Yafaray Download. Yafaray accounts for caustics by using a Photon Mapper in areas where reflective surfaces are detected. It considers materials when rendering images: "Shiny Diffuse, Glossy, Coated Glossy, and Glass" are the four core options, but these may be mixed to produce any material (Estévez). Yafaray uses photon mapping to render caustics; however, it actually maps them backwards from traditional photon mapping. Each "photon" emanates from the camera or eye and is beamed out until it reaches a reflective surface; when it encounters this surface, directed photons are sent from the light source toward the reflective element. The use of photon mapping in this situation increases the realism of the scene by including caustics. An example of caustics rendered with the Yafaray ray tracer follows (Estévez).

causticsexample.jpg

Traditional Photon Mapping does not have to change the way it renders a scene based upon the materials it encounters; rendering always happens the same way. "[Caustics] can in general only be done using the photon map" (Jensen). If the Yafaray ray tracer recognizes a reflective, specifically glass, surface, it uses the photon mapping method on it.

Drawbacks

*Wasted Calculations
The truth is that the first pass takes a very long time and involves many wasted calculations. These calculations are wasted because there are many occurrences where the back of a scene never comes into play, yet countless calculations are made for angles there anyway. For this reason many people think ray tracers are better; I argue that they are not, because the benefit of realism from Photon Mapping outweighs the cost of the calculations.

*Low Density of Photons
Another drawback to this method is that the density of photons projected onto reflective surfaces is not always as high as it should be. This can be countered by sending out more rays per pixel, at the cost of a significant increase in the time it takes to calculate the interactions between all of the rays and the scene. Another solution is to use a ray tracing approach to detect reflective surfaces and then have the light source project more photons in the direction of those objects.

*Hours to Render a Frame
Time is money, and the major drawback to Photon Mapping is the time it takes to render a scene, largely because of the many wasted calculations. A mass of vectors is sent out from the light source without respect to where the viewer or camera is; in traditional photon mapping many more vectors are calculated than only those that affect the camera's view. This is where ray tracing is better: a ray tracer only calculates the rays that affect what the camera sees.

*Must Pre-Render Each Scene
Because of the massive amount of time it takes to render a scene, all scenes using this method must be pre-rendered. Pre-rendering a scene means that all of the calculations and colors are written before the scene is displayed. Due to the multitude of calculations needed to create a scene, real-time rendering is impossible. Therefore, photon mapping cannot be used in real-time settings such as video games.

Ideology Behind Photon Mapping

Photon Emission

Photon emission starts at the light source. All photons are initially sent out as rays of light, which are referred to as vectors (Purcell 2). Vectors are sent out in many different directions depending on which kind of light source is emitting the light. There are four different types of light sources: Diffuse Point Light, Spherical Light, Square Light, and Complex Light (Walters). The number of photons emitted by each light depends on the number of light sources in the scene and the power of each source; the more powerful a light source, the more photons it emits (Walters).

lightSources.gif

*Point Light Sources
For the Diffuse Point Light sources the goal is to emit photons uniformly in all directions.

*Area Light Sources
For area light sources, such as Spherical, Square, and Complex lights, a random position on the surface is chosen and then a random direction is chosen in which to shoot the vector.
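A standard way to emit photons uniformly in all directions from a diffuse point light is rejection sampling: pick random points in the unit cube until one falls inside the unit sphere, then normalize it into a direction. A minimal sketch (the class name EmissionSampler is my own):

```java
import java.util.Random;

// Rejection sampling for a uniformly distributed emission direction,
// as used for diffuse point lights: sample the unit cube until the
// point falls inside the unit sphere, then normalize it.
public class EmissionSampler {
    public static double[] randomDirection(Random rng) {
        double x, y, z;
        do {
            x = rng.nextDouble() * 2 - 1; // each coordinate in [-1, 1]
            y = rng.nextDouble() * 2 - 1;
            z = rng.nextDouble() * 2 - 1;
        } while (x * x + y * y + z * z > 1 || (x == 0 && y == 0 && z == 0));
        double len = Math.sqrt(x * x + y * y + z * z);
        return new double[] { x / len, y / len, z / len };
    }
}
```

Because accepted points are uniform inside the sphere, the normalized directions are uniform over its surface.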

*Solution for Unseen Photons
The drawback to both of these types of light sources is that most photons will be lost and never seen. For this reason a Projection Map is used, which optimizes photon emission by directing photons toward important objects. "Projection maps are typically implemented as bitmaps that are warped onto a bounding shape for the light source where each bit determines whether geometry of importance is in that direction" (Walters).

*Importance of Projection Maps
Projection maps are especially important for caustics. "Caustics are generated from focused light coming from specular surfaces and require a higher density of photons for an accurate radiance estimate" (Walters). Using a Projection Map is a way of faking results. In Jason Dengler's talk he said that if all of the calculations are made with no faking, then the approach is wrong: it takes too much computing to generate a scene based solely on physics. For this reason, instead of shooting more vectors from the light sources, a projection map allows more vectors to be focused toward elements of interest. This significantly cuts down on the number of calculations that would have been made if that volume of vectors had been projected in all directions.
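A projection map can be sketched as a boolean grid over the sphere of directions around the light: a cell is marked true when geometry of importance lies in that direction, and emission is simply skipped for unmarked cells. The grid resolution and the spherical-coordinate bucketing here are illustrative assumptions, not taken from any particular implementation:

```java
// Illustrative projection map: a boolean grid over spherical directions
// around a light. Photons are only emitted toward cells marked important.
public class ProjectionMap {
    private final boolean[][] cells; // [theta bucket][phi bucket]
    private final int thetaRes, phiRes;

    public ProjectionMap(int thetaRes, int phiRes) {
        this.thetaRes = thetaRes;
        this.phiRes = phiRes;
        this.cells = new boolean[thetaRes][phiRes];
    }

    // Mark the cell containing direction (dx, dy, dz) as important.
    public void markImportant(double dx, double dy, double dz) {
        int[] c = cellOf(dx, dy, dz);
        cells[c[0]][c[1]] = true;
    }

    // Should a photon be emitted in this direction at all?
    public boolean shouldEmit(double dx, double dy, double dz) {
        int[] c = cellOf(dx, dy, dz);
        return cells[c[0]][c[1]];
    }

    // Convert a direction into (theta, phi) buckets on the grid.
    private int[] cellOf(double dx, double dy, double dz) {
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        double theta = Math.acos(dz / len);        // [0, pi]
        double phi = Math.atan2(dy, dx) + Math.PI; // [0, 2*pi]
        int ti = Math.min((int) (theta / Math.PI * thetaRes), thetaRes - 1);
        int pi = Math.min((int) (phi / (2 * Math.PI) * phiRes), phiRes - 1);
        return new int[] { ti, pi };
    }
}
```

Marking only the cells that cover, say, a glass object concentrates the photon budget where caustics will actually form.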

Photon Scattering

Whenever a ray of light encounters an element, part of the ray is absorbed, part is reflected, and part is refracted. Photon Mapping accounts for this by splitting photons: the original photon is copied, with its power distributed among the child photons. This way, depending on the surface the ray comes in contact with, the ray can act differently. On more reflective elements the power associated with the absorbed photon will be quite small, the power associated with the reflected photon will be high, and the refracted photons will also not have high power values.
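Photon splitting can be sketched as distributing the parent photon's power according to the surface's reflection, refraction, and absorption coefficients, which for energy conservation must sum to one. The class name PhotonSplit and the coefficient values below are my own illustrative choices:

```java
// Sketch of photon splitting: the parent's power is divided among the
// reflected and refracted children, with the absorbed share discarded.
// The three coefficients are per-surface values that must sum to 1.
public class PhotonSplit {
    public final double reflectedPower, refractedPower, absorbedPower;

    public PhotonSplit(double parentPower, double reflect, double refract, double absorb) {
        if (Math.abs(reflect + refract + absorb - 1.0) > 1e-9)
            throw new IllegalArgumentException("coefficients must sum to 1");
        reflectedPower = parentPower * reflect;
        refractedPower = parentPower * refract;
        absorbedPower  = parentPower * absorb;
    }
}
```

For a highly reflective surface, new PhotonSplit(1.0, 0.8, 0.15, 0.05) keeps most of the power in the reflected child, matching the description above.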

*Absorbed Photons
When part of a photon is absorbed, that part is no longer able to bounce around the scene or provide light. Absorption occurs when a material has little reflectivity. For instance, a black matte paint absorbs more light than a white paint that reflects the light: the black matte paint is both a dark color that does not reflect much light and has a matte finish that reduces its reflectivity even further. Therefore, the absorption percentage of the black paint is much higher than that of the white paint. As for colors, when photons that do not contain the color of the element come into contact with the element, the photon is absorbed by the element. The image below exemplifies this lighting effect.

absorb.jpg

*Reflected Photons
When a photon is reflected, the ray changes direction, and that point on the element gets the color of whatever element the vector comes in contact with next. A photon is reflected when the absorption of an element is very low. A photon reflects at the same angle at which it encounters the element (the angle of incidence).

reflection.jpg
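The mirror reflection of an incoming direction d about a surface normal n is given by the standard formula r = d − 2(d·n)n, which preserves the angle of incidence. A minimal sketch, assuming n is already a unit vector and d points toward the surface:

```java
// Mirror reflection of an incoming direction d about a unit normal n:
// r = d - 2 (d . n) n, which preserves the angle of incidence.
public class Reflect {
    public static double[] reflect(double[] d, double[] n) {
        double dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
        return new double[] {
            d[0] - 2 * dot * n[0],
            d[1] - 2 * dot * n[1],
            d[2] - 2 * dot * n[2]
        };
    }
}
```

For example, a ray traveling down and to the right off a horizontal floor comes back up and to the right at the same angle.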

*Refracted Photons
Refraction is the bending of a wave when it encounters a medium that forces the wavelength of the light wave to change. When light passes from a less dense medium to a denser medium, refraction bends the light ray toward the normal (see the image below). Speaking of photons instead of waves, this means that the direction in which the photon travels changes, creating a bending effect. Refraction is what makes caustics possible with photon mapping.

refraction.gif
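The refracted direction follows Snell's law, n1 sin θ1 = n2 sin θ2. A common vector form is sketched below, assuming unit-length d and n with n pointing against d; it returns null when total internal reflection occurs (the class name Refract is my own):

```java
// Refraction by Snell's law in vector form. d is the unit incoming
// direction, n the unit surface normal pointing against d, and
// n1, n2 the refractive indices of the two media. Returns null when
// total internal reflection occurs.
public class Refract {
    public static double[] refract(double[] d, double[] n, double n1, double n2) {
        double eta = n1 / n2;
        double cosI = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
        double sinT2 = eta * eta * (1 - cosI * cosI);
        if (sinT2 > 1) return null; // total internal reflection
        double cosT = Math.sqrt(1 - sinT2);
        return new double[] {
            eta * d[0] + (eta * cosI - cosT) * n[0],
            eta * d[1] + (eta * cosI - cosT) * n[1],
            eta * d[2] + (eta * cosI - cosT) * n[2]
        };
    }
}
```

At normal incidence the ray passes straight through; entering a denser medium (n2 > n1) at an angle bends the ray toward the normal, as described above.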

*Proper Scattering of Photons
When Jason Dengler spoke about Photon Mapping he stressed that making every calculation to determine the actual lighting of a scene is not the proper approach; faking is a necessity with respect to both time and calculation complexity. As an alternative to calculating how photons reflect, refract, and absorb, a Russian Roulette method is used to randomly choose which photons are reflected, refracted, and absorbed. Russian Roulette "reduce[s] both the computational and storage costs while still obtaining the correct result" (Wilson). This way of deciding how to handle photons saves a great deal of space and time when rendering scenes.
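Russian Roulette replaces photon splitting with one random choice: a single uniform number decides whether the whole photon is reflected, refracted, or absorbed, with probabilities equal to the surface coefficients. A sketch with example coefficients (the class name RussianRoulette is my own):

```java
import java.util.Random;

// Russian Roulette: instead of splitting a photon into weaker children,
// one random draw decides its single fate, with probabilities equal to
// the surface's reflection / refraction / absorption coefficients.
public class RussianRoulette {
    public enum Fate { REFLECTED, REFRACTED, ABSORBED }

    public static Fate decide(Random rng, double reflect, double refract) {
        double xi = rng.nextDouble();          // uniform in [0, 1)
        if (xi < reflect) return Fate.REFLECTED;
        if (xi < reflect + refract) return Fate.REFRACTED;
        return Fate.ABSORBED;                  // remaining probability
    }
}
```

Over many photons the average behavior matches the split coefficients, which is why the result is still correct while each individual photon stays at full power.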

Methods Needed in Photon Mapper Library

*Code Explanation
The code in the code exploration section was written from scratch by Eric Levicky. It was written because these classes are necessary for making a photon mapper. They do not perform the calculations that bounce the vector around the scene; Jason Dengler advised me to stay away from those calculations because they are implemented in the actual photon mapper classes. The main focus of the written code was to allow photons to be constructed, elements to be constructed, and the power of photons to be manipulated.

*Code Takeaways
From writing this code I learned that a lot goes into making even the simplest thing. I always knew that getters and setters were necessary to keep track of instance variables, but I did not realize how far this can be taken with different kinds of constructors that all do basically the same thing. Allowing the user to implement a library of classes in different ways requires thinking about the multitude of possible implementations a user might choose.

When coding the Power class I had to think of a way to decrease the power of a photon to account for absorption. My solution was to decrease the color associated with the photon; when the color is black it cannot decrease any further.

I also had to account for when a photon encounters an element. I coded it so that the photon takes the darker color, channel by channel, which accounts for absorption of the colors that the element does not contain.

Classes Needed

Three-dimensional point and vector classes are a necessity for calculating a photon's interaction with the scene. These classes have already been written and currently live in the Java vecmath library. Classes that have to be written for a photon mapper library are the Photon, Power, and Element classes.

*Point Class
The Point class, located at javax.vecmath.Point3d, is necessary to locate a point in the three-dimensional scene. Its built-in methods are viewable at point3d methods. The Point class will be imported in the Photon and Element classes.

*Vector Class
The Vector class can also be imported; its location is javax.vecmath.Vector3d. Its built-in methods are viewable at vector3d methods. This class will be used in the Element class as well as when photons are emitted from the light source.

*Photon Class
The Photon class uses the Point class to keep track of where the photon is in the scene. It is imperative that this class have getters and setters for the point, because the point will be changed based upon the calculations made in a photon mapper. This class must also include a variable to keep track of the power associated with each photon.

*Power Class
The Power class keeps track of what color to display; therefore it must import java.awt.Color. The Power class tracks the power by changing the color of the photon. When the photon no longer has power it will be black, which means all color channels are zero. This class has methods to decrease the power of a photon by a percentage, passed in as a double between 0 and 1. The Power class is a helper class to the Element class, described below: when an element must absorb some power from the photon, the Power class is used to do so.

*Element Class
The Element class keeps track of the color, normal, and absorbancy of each polygon in the scene. This class must import java.awt.Color and javax.vecmath.Vector3d. The color is necessary to tell the photon what color to become after hitting the element; for example, a red block would change the photon to red when the photon comes into contact with it. This same block must also tell the scene which way its normal vector points (a vector that goes from the center of the polygon out at a 90° angle to the surface), which is why the Vector3d class must be imported. Another responsibility of the Element class is to determine how much power to absorb from the photon; absorption depends on the color and material of the element.

The color is attained by using the encounter(Color c) method. This method makes the photon take the darker of the two colors. For instance, if a black photon encounters a white object, the photon remains black; if a white photon encounters a black object, the photon takes on the black color.

Goal of Each Photon

The goal of each photon is to keep track of its power level and its location in the scene. The photon keeps track of its color using the Power class and monitors whether it should continue calculations based upon that color: if it is black, it should no longer bounce around the scene. When the photon comes into contact with an element, it mixes its color with the color of the encountered element. When mixing, the greatest each channel can be is the photon's own value. Therefore, if the red in the photon is 35 and it encounters an element with a red value of 150, the resulting red value is 35. However, when the element's channel is lower than the photon's, the photon takes that value: if the photon's red value is 210 and it encounters an element with a red value of 120, the resulting red value will be 120.
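The per-channel rule described above is simply a minimum: each resulting channel is the smaller of the photon's value and the element's value. A small standalone sketch using java.awt.Color (the class name ColorMix is my own):

```java
import java.awt.Color;

// Per-channel "darkest wins" mixing: each resulting channel is the
// minimum of the photon's channel and the element's channel.
public class ColorMix {
    public static Color encounter(Color photon, Color element) {
        return new Color(
            Math.min(photon.getRed(), element.getRed()),
            Math.min(photon.getGreen(), element.getGreen()),
            Math.min(photon.getBlue(), element.getBlue()));
    }
}
```

Using the numbers from the paragraph above: a photon with red 35 meeting an element with red 150 keeps red 35, while a photon with red 210 meeting an element with red 120 drops to red 120.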

Methods Needed

When a photon, attached to a vector, encounters an element, it needs to be able to get the Color from the element; therefore the Element class needs a getColor method for each polygon. It also needs a getNormal() method so that the normal vector can be retrieved from the polygon. The normal determines how vectors bounce off the polygon when bouncePhoton is called. The bouncePhoton method is a complex method involving quite a bit of mathematics, but it returns the vector that the photon will follow after encountering the element. A getAbsorbancy() method is needed to determine how much of each photon's power is absorbed by the element.

Code Exploration

Photon Class

import java.awt.Color;
import javax.vecmath.Point3d;

public class Photon {
    //instance variables
    Point3d location;
    Power rgbPower;

    public Photon(Point3d p, Power power)
    {
        location = p;
        rgbPower = power;
    }

    public Photon(Point3d p, Color c)
    {
        this(p, new Power(c));
    }

    public Photon(double x, double y, double z, Power p)
    {
        this(new Point3d(x,y,z), p);
    }

    public Photon(double x, double y, double z, Color c)
    {
        this(new Point3d(x,y,z), new Power(c));
    }

    public boolean continueBouncing()
    {
        return !rgbPower.isBlack();
    }

    public Point3d getLocation()
    {
        return location;
    }

    public void setLocation(Point3d loc)
    {
        location = loc;
    }

    public void setLocation(double x, double y, double z)
    {
        Point3d pt = new Point3d(x,y,z);
        location = pt;
    }

    public Power getPower()
    {
        return rgbPower.getPower();
    }

    public void setPower(Power p)
    {
        rgbPower.setPower(p);
    }

    public void encounterElement(Power p)
    {
        rgbPower.encounterColor(p.getColor());
    }

    public void encounterElement(Color c)
    {
        rgbPower.encounterColor(c);
    }
}

Power Class

import java.awt.Color;
public class Power {
    //instance variables
    Color color;

    public Power(Color c)
    {
        color = c;
    }

    public Color getColor()
    {
        return color;
    }

    public boolean isBlack()
    {
        //use equals, not ==, to compare Color values rather than references
        return color.equals(new Color(0, 0, 0));
    }

    public void setPower(Color newPow)
    {
        color = newPow;
    }

    public void setPower(Power newPow)
    {
        color = newPow.getColor();
    }

    public Power getPower()
    {
        return this;
    }

    public void decreasePowerRedGreenBlue(double percent)
    //percent will be a number between 0 and 1
    {
        //note: java.awt.Color's constructor takes (red, green, blue) in that order
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerBlueGreen(double percent)
    {
        color = new Color(decreasePowerRedCalc(0), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerRedGreen(double percent)
    {
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(0));
    }

    public void decreasePowerRedBlue(double percent)
    {
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(0), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerRed(double percent)
    {
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(0), decreasePowerBlueCalc(0));
    }

    public void decreasePowerBlue(double percent)
    {
        color = new Color(decreasePowerRedCalc(0), decreasePowerGreenCalc(0), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerGreen(double percent)
    {
        color = new Color(decreasePowerRedCalc(0), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(0));
    }

    public int decreasePowerRedCalc(double percent)
    {
        return (int)(color.getRed()*(1-percent));
    }

    public int decreasePowerBlueCalc(double percent)
    {
        return (int)(color.getBlue()*(1-percent));
    }

    public int decreasePowerGreenCalc(double percent)
    {
        return (int)(color.getGreen()*(1-percent));
    }

    public void encounterColor(Color c)
    {
        int newRed = color.getRed();
        int newBlue = color.getBlue();
        int newGreen = color.getGreen();

        if (newRed > c.getRed())
            newRed = c.getRed();
        if (newBlue > c.getBlue())
            newBlue = c.getBlue();
        if (newGreen > c.getGreen())
            newGreen = c.getGreen();

        Color temp = new Color(newRed, newGreen, newBlue);
        color = temp;
    }
}

Element Class

import java.awt.Color;
import javax.vecmath.Vector3d;

public class Element {
    //instance variables 
    Color color;
    Vector3d normalVector;
    double absorbancy;

    public Element(Color c, Vector3d normal, double a)
    {
        color = c;
        normalVector = normal;
        absorbancy = a;
    }

    public double getAbsorbancy()
    {
        return absorbancy;
    }

    public Vector3d getNormal()
    {
        return normalVector;
    }

    public Color getColor()
    {
        return color;
    }
}

Conclusion

Is Photon Mapping worth the cost?
A lot of time goes into the calculation passes for photon mapping, but there is also a sense of realism when these scenes finish rendering. This sense of reality is the goal of global illumination, and I believe the best way to attain it is photon mapping.

Photon Mapping has a hidden benefit: once a scene is rendered, the camera can be placed anywhere within the scene and the scene should still appear real. When ray tracers are used this is not possible, because every ray originates at the camera's position.

The ray tracing example shown above, with the cup, included Photon Mapping methods. Without these methods, image effects like caustics would not be possible without manually faking them. Even though faking is necessary, manual faking is very costly in man-hours.

Photon Mapping may take a long time to render, but once it finishes, the result looks very realistic. Therefore, I believe the authenticity of Photon Mapping is worth the cost in rendering time.