Levicky Rough Draft

Eric Levicky

Photon Mapping - Rough Draft

Pixar Studios uses global illumination to make its animated movies. On Pixar’s website they list the steps of making a movie. The seventh step is where computers enter the process. “Using the art department’s model packet – a set of informational drawings – the characters, sets and props are either sculpted by hand and then scanned in three-dimensionally or modeled in 3-D directly in the computer. They are then given ‘avitars,’ or hinges, which the animator will use to make the object or character move” (Pixar). Step thirteen of making a movie is rendering. “Pixar’s Renderfarm is a huge computer system that interprets the data and incorporates motion blur. Each frame represents 1/24 of a second of screen time and takes about six hours to render, though some frames have taken as many as ninety hours” (Pixar). Rendering thus accounts for a large share of the time it takes to produce a movie.

Popular methods of rendering consist of the Z-Buffer, Ray Tracing, and Environment Mapping. With the Z-Buffer, elements are drawn based on distance from the camera; only the polygons that appear in front are drawn (Benander). Ray Tracing is a technique that shoots rays out from a single viewpoint and then calculates what color each ray should project based on what it hits and how it bounces off elements (Benander). Environment Mapping uses the z-buffer technique except for reflective objects, whose reflections are actually textures. Such a texture is made by removing the object from the scene and taking a picture up, down, left, right, front, and back (Benander). Photon Mapping is another technique that renders extremely realistic scenes.

Introduction to Photon Mapping

Photon Mapping was developed by Henrik Jensen as an efficient alternative to raytracing techniques (Walters). This method separates the illumination calculations from the geometric calculations, which allows the elaborate equations to be calculated and stored separately. The benefit of calculating and storing separately is that different techniques may be applied, which may reduce the time needed to render a scene (Walters). The first pass of this two-pass method creates the photon map (Walters). The photon map is the record of photons' interactions with elements in the scene (Purcell). The second pass is the rendering pass; this pass estimates diffuse indirect illumination (Purcell).

Benefits

Photon Mapping uses a two-pass method, which tries to shorten rendering time by calculating the interactions between vectors and elements in the first pass and then rendering the image in the second. In practice, the first pass takes a very long time and contains many wasted calculations, which is why many people consider ray tracers superior. However, ray tracers do not usually pick up ambient light or the lighting effects of bent rays emanating from the light source, otherwise known as caustics, which are described below. Ray tracers have to fake the ambient light. Some photon mappers fake this as well, which requires far less calculation, but those that calculate these interactions fully enough to capture ambient light without faking it look very real. Below is an example of an image created with a ray tracer called YafaRay, an open-source ray tracer that renders very realistic images (Estévez).

[Image: gabichloft.jpg — an interior scene rendered with the YafaRay ray tracer (Estévez)]

As you can see, this image is very realistic, and they did an excellent job of faking the ambient light. A download of this open-source program is available at Yafaray Download. YafaRay accounts for caustics, which are "light concentration produced by reflective and refractive objects" (Estévez). It also takes materials into consideration when rendering images: "ShinyDiffuse, Glossy, CoatedGlossy, and Glass" are the four core material options, but these may be mixed to produce any material (Estévez). This method claims to use photon mapping, but it actually maps photons backwards relative to traditional photon mapping: each "photon" emanates from the camera or eye and is beamed out until it reaches a light source; however, when it encounters a reflective object, it takes the light source into account and renders the lighting correctly using Photon Mapping. Using the Photon Mapping method in this situation increases the realism of the scene by including caustics. "Caustics are formed by light that is reflected or transmitted by a number of specular surfaces before interacting with a diffuse surface. Examples of caustics are the light patterns on the bottom of a swimming pool and light focused onto a table through a glass of cognac" (Jensen). An example follows (Estévez).

[Image: causticsexample.jpg — caustics produced by a refractive glass object (Estévez)]

Traditional Photon Mapping does not have to change the way it renders a scene based on the material it encounters; it always renders the same way. "This can in general only be done using the photon map! Unless the specular surface is simple and backwards ray tracing is possible" (Jensen). If the ray tracer recognizes a reflective surface, specifically glass, it applies the photon mapping method to it.

Drawbacks

A drawback to this method, noticed by Andrew Cantino, a student at Swarthmore College, is that "very few photons actually hit transmissive or reflective objects, leading to poor photon density on diffuse surfaces for caustics" (Cantino). To improve this in his application, he manually sent additional rays toward those reflective surfaces. This takes more of a ray tracer's approach, but the two-pass process of Photon Mapping made it possible to apply.

Another drawback is the time it takes to render a scene with photon mapping. Rendering takes a long time because the method's main weakness is wasted calculation: a mass of vectors is sent out from the light source without regard to where the viewer or camera is, and there is no way, using strictly Photon Mapping, to calculate only the vectors that affect the camera. This is where ray tracing is better: a ray tracer calculates only the rays that affect what the camera sees.

Because of this massive amount of time, all scenes must be pre-rendered when using this method. Pre-rendering a scene means that all of the calculations and colors are computed before the scene is displayed; the multitude of calculations needed to create a scene makes real-time rendering impossible.

Ideology Behind Photon Mapping

Photon Emission

Photon emission starts from the light source. All photons are initially sent out as rays of light, which are referred to as vectors (Purcell 2). These vectors are sent out in many different directions depending on which kind of light source is emitting the light. There are four different types of light sources: Diffuse Point Light, Spherical Light, Square Light, and Complex Light (Walters). The number of photons emitted by each light depends on the number of light sources in the scene and the power of each source; the more powerful a light source, the more photons it emits (Walters).
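As a concrete sketch of the emission step for the simplest source, a diffuse point light, the following Java method draws uniformly distributed directions using Jensen's standard rejection-sampling approach. The class name and the plain double[3] direction representation are my own illustrative choices, not part of the photon mapper library described later.

```java
import java.util.Random;

// Hypothetical sketch: uniform photon directions from a diffuse point
// light. A random point is drawn inside the unit cube and rejected if it
// lies outside the unit sphere; the surviving point, normalized, is a
// direction uniformly distributed over the sphere.
public class PointLightEmitter {
    private static final Random rng = new Random();

    public static double[] randomDirection() {
        double x, y, z, len2;
        do {
            x = 2.0 * rng.nextDouble() - 1.0;
            y = 2.0 * rng.nextDouble() - 1.0;
            z = 2.0 * rng.nextDouble() - 1.0;
            len2 = x * x + y * y + z * z;
        } while (len2 > 1.0 || len2 == 0.0);   // reject outside sphere (and origin)
        double len = Math.sqrt(len2);
        return new double[] { x / len, y / len, z / len };
    }
}
```

In a full mapper, each emitted direction would become a vector carrying one photon, and the number of calls would be scaled by each light's power.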

[Image: lightSources.gif — the four light source types (Walters)]

For Diffuse Point Light sources the goal is to emit photons uniformly in all directions. For area light sources, such as Spherical, Square, and Complex lights, a random position on the surface is chosen, and then a random direction is chosen in which to shoot the vector. The drawback is that most photons will be lost and never seen. For this reason a Projection Map is used; projection maps optimize photon emission by directing photons towards important objects. "Projection maps are typically implemented as bitmaps that are warped onto a bounding shape for the light source where each bit determines whether geometry of importance is in that direction" (Walters). Projection maps are especially important for caustics. "Caustics are generated from focused light coming from specular surfaces and require a higher density of photons for an accurate radiance estimate" (Walters). Using a Projection Map is a way of faking results. In Jason Dengler's talk, he said that if all of the calculations are made with no faking, then something is wrong: it takes too much computing to actually generate a scene based solely on physics. For this reason, instead of shooting more vectors from the light sources, a projection map allows more vectors to be focused towards elements of interest. This significantly cuts down on the calculations that would have been made had that volume of vectors been projected in all directions.
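The quoted description of a projection map can be sketched as a coarse direction grid warped onto a bounding sphere around the light, where a set cell means "important geometry lies in that direction." Everything below is an illustrative assumption of mine (grid layout, class and method names), not the mapper's actual implementation.

```java
// Hypothetical projection-map sketch: a boolean grid over spherical
// coordinates (theta, phi). Photons are only emitted toward marked cells.
public class ProjectionMap {
    private final boolean[][] cells;        // [theta cell][phi cell]
    private final int thetaCells, phiCells;

    public ProjectionMap(int thetaCells, int phiCells) {
        this.thetaCells = thetaCells;
        this.phiCells = phiCells;
        this.cells = new boolean[thetaCells][phiCells];
    }

    // Mark the cell covering a unit direction as containing important geometry.
    public void markDirection(double x, double y, double z) {
        cells[thetaIndex(z)][phiIndex(x, y)] = true;
    }

    // Decide whether an emitted photon direction is worth keeping.
    public boolean shouldEmit(double x, double y, double z) {
        return cells[thetaIndex(z)][phiIndex(x, y)];
    }

    private int thetaIndex(double z) {
        double theta = Math.acos(Math.max(-1.0, Math.min(1.0, z)));   // 0..pi
        return Math.min(thetaCells - 1, (int) (theta / Math.PI * thetaCells));
    }

    private int phiIndex(double x, double y) {
        double phi = Math.atan2(y, x) + Math.PI;                      // 0..2pi
        return Math.min(phiCells - 1, (int) (phi / (2 * Math.PI) * phiCells));
    }
}
```

An emitter would draw candidate directions and discard those for which shouldEmit returns false, concentrating photons on elements of interest exactly as the paragraph above describes.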

Photon Scattering

Whenever a ray of light encounters an element, part of the ray is absorbed, part is reflected, and part is refracted. Photon Mapping accounts for this by splitting photons: the original photon is copied, with the power of the original distributed among the child photons. This way the ray can act differently depending on the surface it comes in contact with. On more reflective elements, the power assigned to the absorbed photon will be quite small, the power assigned to the reflected photon will be high, and the refracted photons will also carry little power.
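A minimal sketch of this power split, assuming per-surface reflectance and transmittance coefficients that sum to at most 1 (the class and field names are illustrative, not from the library described later):

```java
// Hypothetical photon-splitting sketch: the parent photon's power is
// divided among the reflected, refracted, and absorbed children according
// to the surface's coefficients; whatever is neither reflected nor
// transmitted is absorbed.
public class PhotonSplit {
    public final double reflectedPower, refractedPower, absorbedPower;

    public PhotonSplit(double parentPower, double reflectance, double transmittance) {
        reflectedPower = parentPower * reflectance;
        refractedPower = parentPower * transmittance;
        absorbedPower  = parentPower * (1.0 - reflectance - transmittance);
    }
}
```

For a shiny surface (say reflectance 0.7, transmittance 0.2), the reflected child carries most of the parent's power, matching the behavior described above.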

When part of a photon is absorbed, it is no longer able to bounce around the scene or provide light. When a photon is reflected, the ray changes direction, and that point on the element takes the color of whatever element the vector comes in contact with next. When a photon is refracted, it is broken down into the different colors of the visible light spectrum, and the vector is sent off in the appropriate direction.

This all takes a great deal of computing, which, recalling Dengler's point, means that if it is all done exactly it is wrong. As an alternative, "Jensen advocates using a standard Monte Carlo technique called Russian Roulette. We use Russian Roulette to probabilistically decide whether photons are reflected, refracted or absorbed. Using this technique we reduce both the computational and storage costs while still obtaining the correct result" (Wilson). This probabilistic treatment of photons saves a great deal of space and time when rendering these scenes.
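The Russian Roulette step can be sketched in a few lines: instead of splitting a photon into weaker children, one random number decides whether the whole photon is reflected, transmitted, or absorbed. The class name, constants, and coefficient parameters are my own illustrative choices.

```java
import java.util.Random;

// Hypothetical Russian-Roulette sketch following the Wilson quote above.
// reflectance + transmittance must be <= 1; the remainder is absorption.
public class RussianRoulette {
    public static final int ABSORBED = 0, REFLECTED = 1, TRANSMITTED = 2;

    public static int decide(double reflectance, double transmittance, Random rng) {
        double xi = rng.nextDouble();                 // xi in [0, 1)
        if (xi < reflectance) return REFLECTED;       // survives whole, reflected
        if (xi < reflectance + transmittance) return TRANSMITTED;
        return ABSORBED;                              // photon path ends here
    }
}
```

On average the energy balance matches splitting, but each bounce now traces a single photon at full power, which is where the storage and computation savings come from.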

Methods Needed in Photon Mapper Library

Point, vector, and matrix classes are a necessity for calculating a photon's interaction with the scene. These classes have already been written and live in the Java vecmath library.

The point class, located at javax.vecmath.Point3d, is necessary to locate a point in the three-dimensional scene. Its built-in methods are viewable at point3d methods.

Similar to the point class, the vector class can also be imported; its location is javax.vecmath.Vector3d. Its built-in methods are viewable at vector3d methods.

The Photon class will use the point class to keep track of where the photon is. It is imperative that this class have getters and setters for the point, because the location will change based on the calculations made in the photon mapper. This class must also include a way to keep track of the power associated with each photon.

The Power class will keep track of what color to display; therefore this class must import java.awt.Color. The Power class tracks power by changing the color of the photon: when the photon no longer has power it will be black, meaning all color components are zero. This class has methods to decrease the power of a photon by a percentage, passed in as a double between 0 and 1.

The goal of each photon is to keep track of its power level and of where it is in the scene. The photon tracks its color using the Power class and monitors whether it should continue calculations based on that color; if it is black, it should no longer bounce around the scene. When the photon comes into contact with an element, it mixes its color with the color of the encountered element. Each resulting color component can be no greater than the photon's value for that component: if the red in the photon is 35 and it encounters an element with a red value of 150, the resulting red value is 35. However, when the element's value is lower than the photon's, the result takes the element's value: if the photon's red value is 210 and it encounters an element with a red value of 120, the resulting red value will be 120. In other words, each component of the result is the minimum of the two.
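This mixing rule reduces to a per-channel minimum, as the worked numbers above show. A minimal standalone sketch (the ColorMixer class name is mine; the Power class below implements the same rule in its encounterColor method):

```java
import java.awt.Color;

// Sketch of the mixing rule: each channel of the result is the minimum
// of the photon's channel and the encountered element's channel.
public class ColorMixer {
    public static Color mix(Color photon, Color element) {
        return new Color(
            Math.min(photon.getRed(),   element.getRed()),
            Math.min(photon.getGreen(), element.getGreen()),
            Math.min(photon.getBlue(),  element.getBlue()));
    }
}
```

With the document's examples: mixing a photon red of 35 with an element red of 150 yields 35, and a photon red of 210 with an element red of 120 yields 120.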

When a photon, attached to a vector, encounters an element, it will need to get the Color from that element; therefore the Element class will need a getColor method for each polygon. It will also need a getFace() method so that the normal vector can be retrieved from the polygon. The normal determines how vectors bounce off the polygon when bouncePhoton is called. The bouncePhoton method is a complex method that involves quite a bit of mathematics, but it will return the vector that the photon follows after encountering the element. A getAbsorbancy() method will be needed to determine how much of each photon's power the element absorbs.

Code Exploration

The following is a recursive method, in pseudo-code, for the first pass of the photon mapping method. It was obtained from Andrew Cantino, a student studying Advanced Computer Graphics at Swarthmore College in Pennsylvania.

doPhotonMap(ray r, model m, int level)
{
    Intersect r with m, continue if an intersection was found
    If an object containing a photon map was intersected,
        and if this photon has bounced at least once (avoid direct illumination):
            Add this photon to the object's photon map
    If the intersected object is reflective and recursion depth has not been met:
        doPhotonMap(direction of reflection, m, level - 1)
    If the intersected object is transmissive and recursion depth has not been met:
        doPhotonMap(direction of transmission, m, level - 1)
}
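Cantino's pseudo-code can be turned into compilable Java. The sketch below is my own translation under stated assumptions: Ray and Surface are minimal stand-in types, the intersection test is stubbed so every ray hits the surface passed in, and the reflection and transmission directions are elided. A real mapper would substitute its scene, vector, and photon classes.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged Java translation of Cantino's first-pass pseudo-code.
public class FirstPass {
    public static class Ray { /* origin and direction elided */ }

    public static class Surface {
        boolean reflective, transmissive, storesPhotons;
        List<Ray> photonMap = new ArrayList<>();
    }

    // Stub: pretend every ray intersects the single surface given.
    static Surface intersect(Ray r, Surface scene) { return scene; }

    static Ray reflectionOf(Ray r)   { return new Ray(); }  // stub
    static Ray transmissionOf(Ray r) { return new Ray(); }  // stub

    public static void doPhotonMap(Ray r, Surface scene, int level, int bounces) {
        Surface hit = intersect(r, scene);
        if (hit == null) return;                 // continue only on intersection
        // avoid direct illumination: store only photons that bounced at least once
        if (hit.storesPhotons && bounces >= 1)
            hit.photonMap.add(r);
        if (level <= 0) return;                  // recursion depth reached
        if (hit.reflective)
            doPhotonMap(reflectionOf(r), scene, level - 1, bounces + 1);
        if (hit.transmissive)
            doPhotonMap(transmissionOf(r), scene, level - 1, bounces + 1);
    }
}
```

The extra bounces parameter makes the "has bounced at least once" condition from the pseudo-code explicit.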

Photon Class

import java.awt.Color;
import javax.vecmath.Point3d;

public class Photon {
    //instance variables
    Point3d location;
    Power rgbPower;

    public Photon(Point3d p, Power power)
    {
        location = p;
        rgbPower = power;
    }

    public Photon(Point3d p, Color c)
    {
        this(p, new Power(c));
    }

    public Photon(double x, double y, double z, Power p)
    {
        this(new Point3d(x,y,z), p);
    }

    public Photon(double x, double y, double z, Color c)
    {
        this(new Point3d(x,y,z), new Power(c));
    }

    public boolean continueBouncing()
    {
        return !rgbPower.isBlack();
    }

    public Point3d getLocation()
    {
        return location;
    }

    public void setLocation(Point3d loc)
    {
        location = loc;
    }

    public void setLocation(double x, double y, double z)
    {
        Point3d pt = new Point3d(x,y,z);
        location = pt;
    }

    public Power getPower()
    {
        return rgbPower.getPower();
    }

    public void setPower(Power p)
    {
        rgbPower.setPower(p);
    }

    public void encounterElement(Power p)
    {
        rgbPower.encounterColor(p.getColor());
    }

    public void encounterElement(Color c)
    {
        rgbPower.encounterColor(c);
    }
}

Power Class

import java.awt.Color;
public class Power {
    //instance variables
    Color color;

    public Power(Color c)
    {
        color = c;
    }

    public Color getColor()
    {
        return color;
    }

    public boolean isBlack()
    {
        //compare component values; == would only compare object references
        return color.equals(new Color(0, 0, 0));
    }

    public void setPower(Color newPow)
    {
        color = newPow;
    }

    public void setPower(Power newPow)
    {
        color = newPow.getColor();
    }

    public Power getPower()
    {
        return this;
    }

    public void decreasePowerRedGreenBlue(double percent)
    //percent will be a number between 0 and 1
    {
        //note: the java.awt.Color constructor takes (red, green, blue) in that order
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerBlueGreen(double percent)
    {
        color = new Color(decreasePowerRedCalc(0), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerRedGreen(double percent)
    {
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(0));
    }

    public void decreasePowerRedBlue(double percent)
    {
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(0), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerRed(double percent)
    {
        color = new Color(decreasePowerRedCalc(percent), decreasePowerGreenCalc(0), decreasePowerBlueCalc(0));
    }

    public void decreasePowerBlue(double percent)
    {
        color = new Color(decreasePowerRedCalc(0), decreasePowerGreenCalc(0), decreasePowerBlueCalc(percent));
    }

    public void decreasePowerGreen(double percent)
    {
        color = new Color(decreasePowerRedCalc(0), decreasePowerGreenCalc(percent), decreasePowerBlueCalc(0));
    }

    public int decreasePowerRedCalc(double percent)
    {
        return (int)(color.getRed()*(1-percent));
    }

    public int decreasePowerBlueCalc(double percent)
    {
        return (int)(color.getBlue()*(1-percent));
    }

    public int decreasePowerGreenCalc(double percent)
    {
        return (int)(color.getGreen()*(1-percent));
    }

    public void encounterColor(Color c)
    {
        int newRed = color.getRed();
        int newBlue = color.getBlue();
        int newGreen = color.getGreen();

        if (newRed > c.getRed())
            newRed = c.getRed();
        if (newBlue > c.getBlue())
            newBlue = c.getBlue();
        if (newGreen > c.getGreen())
            newGreen = c.getGreen();

        Color temp = new Color(newRed, newGreen, newBlue);
        color = temp;
    }
}

Element Methods

import java.awt.Color;
import javax.vecmath.Vector3d;

public class Element {
    //instance variables 
    Color color;
    Vector3d normalVector;
    double absorbancy;

    public Element(Color c, Vector3d normal, double a)
    {
        color = c;
        normalVector = normal;
        absorbancy = a;
    }

    public double getAbsorbancy()
    {
        return absorbancy;
    }

    public Vector3d getNormal()
    {
        return normalVector;
    }

    public Color getColor()
    {
        return color;
    }
}

Conclusion

Is Photon Mapping worth the cost? A lot of time goes into the calculation passes for photon mapping, but there is also a sense of realism when these scenes finish rendering. Photon Mapping has a hidden benefit: once a scene is rendered, the camera can be placed anywhere within it and the scene should still appear real. This is not possible when ray tracers alone are used. Also, the ray-traced example shown above, with the glass, included Photon Mapping methods; without them, image effects like caustics would not be possible without manually faking them, and manual faking is very costly in man-hours. Photon Mapping may take a while to render, but once it does so correctly it looks very realistic. Therefore, Photon Mapping is worth the cost of the time to render.

My Reaction To Photon Mapping

I wish I had minored in math! I thoroughly enjoyed researching photon mapping. I would love to work for Pixar or another company that does this kind of work. I really enjoy working with graphics; by no means am I an artist, but I enjoy dabbling in it. I wish I understood matrices better, because the brief introduction in high school was not enough.