Realistic Camera Simulation
Due Date: May 7th, 2015 by 11:59PM PDT.
Most rendering systems generate images with the entire scene in sharp focus, mimicking a pinhole camera. However, real cameras contain multi-lens assemblies with finite apertures and exhibit different imaging characteristics such as limited depth of field, field distortion, vignetting, and spatially varying exposure. In this assignment, you'll extend pbrt with support for a more realistic camera model that accurately simulates these effects.
We will provide you with specifications of real wide-angle, normal, and telephoto lenses, each composed of multiple lens elements. You will build a camera plugin for pbrt that simulates the traversal of light through these lens assemblies onto the film plane of a virtual camera. With this camera simulator, you'll explore the effects of focus, aperture, and exposure. Once you have a working camera simulator, you will add simple auto-focus capabilities to your camera.
Before beginning this assignment you should read the paper A Realistic Camera Model for Computer Graphics by Kolb, Mitchell, and Hanrahan. This paper is one of the assigned course readings. You may also want to review parts of Chapter 6 of the pbrt book.
Download the starter code and data files for Assignment 3. In addition to source code, this archive contains the pbrt scene files you will render in this assignment, a collection of lens data files (*.dat), and auto-focus zone info files.

You'll need to drop in some replacement pbrt files that have slight modifications to support autofocusing for a particular scene. These files, along with skeleton code for the realistic camera simulation, are in the src/ directory of the starter code. Simply replace the corresponding files in the pbrt directory with these replacements to get started. Make sure this code compiles before beginning your modifications.
Browse the Code
In this assignment you will implement the RealisticCamera class defined in realistic.cpp. The other files provided simply ensure that AutoFocus is called at the right time.
Set Up the Camera
The pbrt scenes in this assignment specify that rendering should use the "realistic" camera class. The realistic camera accepts a number of parameters from the scene file, including the name of a lens data file, the distance between the film plane and the location of the back lens element (the one closest to the film), the diameter of the aperture stop, and the length of the film diagonal (the distance from the top left corner to the bottom right corner of the film). The values of these parameters are passed to the constructor of the RealisticCamera class. All values are in units of millimeters. For example, a scene file might specify the following camera:
Camera "realistic" "string specfile" "dgauss.50mm.dat" "float filmdistance" 36.77 "float aperture_diameter" 17.1 "float filmdiag" 70
The *.dat files included with the starter code describe camera lenses using the format described in Figure 1 of the Kolb et al. paper. The RealisticCamera class must read and parse the specified lens data file. In pbrt, a camera's viewing direction is the positive z-direction in camera space, so your camera should look directly down the z-axis. The first lens element listed in the file (the element closest to the world and farthest from the film plane) should be located at the origin in camera space, with the rest of the lens system and the film plane extending in the negative-z direction. Each line in the file contains the following information about one spherical lens interface:
lens_radius z_axis_intercept index_of_refraction aperture

- lens_radius: the spherical radius of the element.
- z_axis_intercept: the thickness of the element; that is, the distance along the z-axis (in the negative direction) that separates this interface from the next.
- index_of_refraction: the index of refraction on the camera side of the interface.
- aperture: the diameter of the aperture of the interface (thus, rays that hit the interface farther than aperture/2 from the center of the aperture don't make it through the lens element).
Note that exactly one line in the data file has lens_radius = 0: this is the aperture stop of the camera, and the aperture value on that line gives the stop's maximum size. The actual size of the aperture stop is passed to the realistic camera as a parameter from the pbrt scene file. Also note that the index of refraction on the world side of the first lens element is 1 (it's air).
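To get started, a minimal parser for the four-column lens rows described above might look like the following sketch. The struct and function names are my own, not pbrt's, and the parser assumes every row carries all four values:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One row of a Kolb-style lens .dat file; field names follow the format
// described above. A radius of 0 marks the aperture stop.
struct LensInterface {
    float radius;     // lens_radius (mm)
    float thickness;  // z_axis_intercept: gap to the next interface along -z (mm)
    float eta;        // index of refraction on the camera side of the interface
    float aperture;   // diameter of the interface's aperture (mm)
};

// Parse whitespace-separated lens rows, skipping blank lines and '#' comments.
std::vector<LensInterface> ParseLensFile(std::istream &in) {
    std::vector<LensInterface> elems;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;
        std::istringstream ls(line);
        LensInterface e;
        if (ls >> e.radius >> e.thickness >> e.eta >> e.aperture)
            elems.push_back(e);
    }
    return elems;
}
```

Remember that rows are listed front-to-back: the first row is the element closest to the world, so its z-intercept sits at the camera-space origin.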
Generating Camera Rays
Next you'll need to implement the RealisticCamera::GenerateRay() method. GenerateRay() takes a sample position in image space (given by sample.imageX and sample.imageY) as an argument and should return a random ray into the scene. To the rest of pbrt, your camera looks just like any other camera; it takes a sample position and returns a ray from the camera out into the world. Here's an outline of the main steps.
- Compute the position on the film plane that the ray intersects from the values of sample.imageX and sample.imageY.
- Remember that the color of a pixel in the image produced by pbrt is proportional to the irradiance incident on a point on the film plane (think of the film as a sensor in a digital camera). This value is an estimate of all light reaching this pixel from the world through all paths through the lens assembly. As stated in the paper, computing this estimate involves sampling radiance along this set of paths. The easiest way to sample all paths is to fire rays at the back element of the lens and trace them out of the camera by computing intersections and refractions at each lens interface (you will not be using the thick lens approximation from the paper to compute the direction of rays exiting the lens). Note that some of these rays will hit the aperture stop and terminate before exiting the front of the lens.
- GenerateRay() returns a weight for the generated ray. The radiance incident along the ray from the scene is modulated by this weight before adding its contribution to the Film. You will need to compute the correct weight to ensure that the irradiance estimate produced by pbrt is unbiased; that is, the expected value of the estimate equals the actual value of the irradiance integral. Note that the weight depends upon the sampling scheme you use.
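As one concrete possibility: if rays are sampled uniformly over the disk of the rear lens element (area A) at film distance Z, the irradiance approximation in the Kolb paper, E ≈ (A/Z^2) cos^4(theta), suggests a per-ray weight of this form. This is a sketch under that sampling assumption, not the only correct weighting:

```cpp
#include <cassert>

// Weight for a ray sampled uniformly over the rear element's disk.
// cosTheta: cosine of the angle between the ray and the film normal;
// rearArea: area of the rear element's sampling disk (mm^2);
// filmDist: distance from film plane to rear element (mm).
float RayWeight(float cosTheta, float rearArea, float filmDist) {
    float c2 = cosTheta * cosTheta;
    return rearArea * c2 * c2 / (filmDist * filmDist);  // (A / Z^2) cos^4(theta)
}
```

If you use a different sampling distribution over the rear element, the weight changes accordingly; the key invariant is that the estimator stays unbiased.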
- Render each of the four provided scenes (hw3_telephoto.pbrt and the other three) using your realistic camera simulator. Example images rendered with both 4 and 512 samples per pixel are given below: telephoto (top left), double gauss (top right), wide angle (bottom left), and fisheye (bottom right). Notice that the wide angle image is especially noisy -- why is that? Hint: look at the ray traces at the top of the page.
4 Samples Per Pixel
512 Samples Per Pixel
Helpful Tips To Get You Started
- ConcentricSampleDisk() is a useful function for converting two 1D uniform random samples into a uniform random sample on a disk (see page 667 of the PBRT book).
- You'll need a data structure to store the information about each lens interface as well as the aperture stop. For each lens interface, determine how to test for intersection and how to determine how rays refract according to the change of index of refraction on either side (review Snell's law).
- For rays that terminate at the aperture stop, return a ray with a weight of 0 -- pbrt tests for such a case and will terminate the ray instead of sending it out into the scene.
- As is often the case in rendering, your code won't produce correct images until everything is working just right. Try to think of ways that you can modularize your work and test incrementally. Use assertions liberally to verify that your code is doing what it should at each step. It may be worth your time to produce a visualization of the rays refracting through your lens system as a debugging aid (compare to those at the top of this web page).
- Be mindful of coordinate systems! Confusion between world space and camera space can be a major source of bugs. The scene is set up to make sure the camera is appropriately sized; because of this setup, the CameraToWorld transform will have a scale factor in it. pbrt expects rays coming from a camera to have normalized direction components (d). To ensure that the scale factor does not de-normalize the direction, re-normalize the ray's direction after applying the CameraToWorld transform:
CameraToWorld(camera_space_ray, ray);  // CameraToWorld has a scale factor
ray->d = Normalize(ray->d);
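When refracting at each lens interface (Snell's law, as mentioned in the tips above), the vector form is convenient. A minimal sketch, with illustrative names rather than pbrt's API:

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };
static Vec operator*(float s, Vec v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float Dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Vector form of Snell's law: refract unit direction d at an interface with
// unit normal n chosen to point against d, where eta = n_incident / n_transmitted.
// Returns false on total internal reflection.
bool Refract(Vec d, Vec n, float eta, Vec *t) {
    float cosI = -Dot(d, n);                       // > 0 when n opposes d
    float sin2T = eta * eta * (1.0f - cosI * cosI);
    if (sin2T > 1.0f) return false;                // total internal reflection
    float cosT = std::sqrt(1.0f - sin2T);
    *t = eta * d + (eta * cosI - cosT) * n;        // transmitted unit direction
    return true;
}
```

Pair this with a ray/sphere intersection test (each interface is a spherical cap), and check the hit point against aperture/2 before refracting.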
Auto-Focus
The auto-focus mechanism in a modern digital camera samples the light incident on subregions of the sensor (film) plane and analyzes it to determine focus. These regions of the frame are called auto-focus zones (AF zones). For example, an auto-focus algorithm might look for the presence of high frequencies (sharp edges in the image) within an AF zone to determine that the image is in focus. You may have noticed the AF zones in the viewfinder of your own camera. As an example, the AF zones used by the auto-focus system in the Nikon D200 are shown below.
In this part of the assignment, you'll be implementing an auto-focus algorithm for your RealisticCamera. We will provide you a scene and a set of AF zones, and you will need to use these zones to automatically determine the film depth for your camera so that the scene is in focus. Notice that in hw3_afdgauss_closeup.pbrt, the camera description contains an extra parameter, af_zones. This parameter specifies the text file that contains a list of AF zones. Each line in the file defines the bottom left and top right of a rectangular zone using four floating point numbers:
xleft xright ytop ybottom
These coordinates are relative to the top left corner of the film (all values fall between 0.0 and 1.0). For example, a zone spanning the entire film plane would be given by 0.0 1.0 0.0 1.0, and a zone spanning the top left quadrant of the film by 0.0 0.5 0.0 0.5.
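If you rasterize a zone to render a subimage, the fractional zone coordinates convert to pixel bounds along these lines. The helper names are hypothetical, and the inclusive-bounds convention is one choice among several:

```cpp
#include <cassert>

// Inclusive pixel bounds for an AF zone given in film-fraction coordinates
// (relative to the top left corner of the film, as in the zone files).
struct PixelRect { int x0, x1, y0, y1; };

PixelRect ZoneToPixels(float xleft, float xright, float ytop, float ybottom,
                       int width, int height) {
    PixelRect r;
    r.x0 = (int)(xleft * width);
    r.x1 = (int)(xright * width) - 1;    // inclusive right edge
    r.y0 = (int)(ytop * height);
    r.y1 = (int)(ybottom * height) - 1;  // inclusive bottom edge
    return r;
}
```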
You will now need to implement the AutoFocus() method of the RealisticCamera class. In this method, the camera should modify its film depth so that the scene is in focus.
There are many ways to go about implementing this part of the assignment. One approach is to shoot rays from within AF zones on the film plane out into the scene (essentially rendering a small part of the image) and then analyze the subimage to determine if it is in focus. The starter code provided is intended to help you implement auto-focus in this manner. Take a look at the "Sum-Modified Laplacian" operator described in Sree Nayar's Shape From Focus paper as an example of a sharpness heuristic.
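As a concrete starting point, here is a sketch of the Sum-Modified Laplacian sharpness score over a grayscale patch. The one-pixel step and zero threshold are simplifications of the operator in Nayar's paper:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sum-Modified Laplacian over a w x h grayscale patch (row-major floats):
// a simple sharpness score; larger values suggest better focus. Contributions
// below `threshold` are ignored, as in Nayar's formulation.
float SumModifiedLaplacian(const std::vector<float> &img, int w, int h,
                           float threshold = 0.0f) {
    float sum = 0.0f;
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            float c = img[y * w + x];
            // Modified Laplacian: absolute second differences, summed per axis
            float ml = std::fabs(2 * c - img[y * w + x - 1] - img[y * w + x + 1]) +
                       std::fabs(2 * c - img[(y - 1) * w + x] - img[(y + 1) * w + x]);
            if (ml >= threshold) sum += ml;
        }
    return sum;
}
```

A perfectly flat patch scores 0; a patch containing an edge scores higher, and the score grows as the edge sharpens, which is what makes it usable as a focus metric.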
To test your auto-focusing algorithm, we provide three scenes that require the camera to focus using a single AF zone. The images resulting from proper focusing on hw3_aftelephoto.pbrt are shown below (rendered at 512 samples per pixel). The location of the AF zone in each image is shown as a white rectangle.
Fancier Auto-Focus (Not Required)
We have also provided scenes (including hw3_bunnies.pbrt) that are constructed so that there is a choice of which object to bring into focus. We have defined multiple auto-focus zones for these scenes. How might you modify your auto-focus algorithm to account for input from multiple zones? Many cameras choose to focus on the closest object they can bring into focus, or have "modes" that allow the user to hint at where focus should be set. For example, you might want to add an additional parameter to your camera that designates whether to focus on close-up or far-away objects in the scene.
Additional Hints and Tips
- When generating subimages corresponding to the AF Zones, it will be important that you use enough samples per pixel to reduce noise that may make it difficult to determine the sharpness of the image. By experimentation, we've found that 256 total samples (16x16) using the Sum-Modified Laplacian gives stable results.
- Although the auto-focusing approach described here involves analyzing the image formed within each AF zone, an alternative approach would be to compute an initial estimate of focus using the depth information of camera ray intersections with scene geometry. Using the thick lens approximation from the Kolb paper, you might be able to compute focus more efficiently than the approach described above.
- Note that the autofocus method contains (commented out) code to dump the current image to disk. This can be very useful in debugging.
- Challenge! See how fast and how robust you can make your auto-focusing algorithm. How might you minimize the number of times you re-render a zone? How can you limit the range of film depths you search over? Can you think of better heuristics than the Sum-Modified Laplacian to estimate sharpness?
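For the challenge above, one possible structure is a coarse-to-fine scan over film depths. This sketch assumes a `sharpness` callback that re-renders and scores an AF zone at a given depth; the bracket, step count, and refinement count are illustrative, not values prescribed by the assignment:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <limits>

// Search [lo, hi] for the film depth maximizing a sharpness score, narrowing
// the bracket around the best depth found on each pass.
float FindBestFilmDepth(const std::function<float(float)> &sharpness,
                        float lo, float hi,
                        int steps = 16, int refinements = 3) {
    float best = lo;
    float bestScore = -std::numeric_limits<float>::max();
    for (int r = 0; r <= refinements; ++r) {
        float step = (hi - lo) / steps;
        for (int i = 0; i <= steps; ++i) {
            float d = lo + i * step;
            float s = sharpness(d);     // expensive: renders and scores the zone
            if (s > bestScore) { bestScore = s; best = d; }
        }
        lo = best - step;               // narrow the bracket around the best depth
        hi = best + step;
    }
    return best;
}
```

Caching scores for depths you have already rendered is an easy way to avoid re-rendering the same zone twice across refinement passes.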
Extra Credit: Bokeh
Once you have a working lens implementation, you can also play around with a more realistic aperture. In real cameras, the aperture typically has several blades that form a shape similar to, but not exactly, a circle. The blocking caused by the polygonal outline of the aperture creates interesting patterns sometimes referred to as bokeh. Make any aperture style you like (common ones use several polygonal blades, which approximate a circle more closely as the blade count increases), including cutesy shapes like stars or hearts. You may want to create a new scene with bright, out-of-focus light sources in order to see the effect (bokeh is most visible when street lamps or other lights are heavily out of focus).
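One simple way to get a polygonal aperture is rejection sampling: sample the aperture disk as before, then keep only points inside a regular n-gon. A sketch of the inside test, using the closed-form boundary radius of a regular polygon (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Test whether point (x, y) lies inside a regular n-bladed aperture of
// circumradius R centered at the origin, with one vertex at angle 0.
// The polygon's boundary radius at angle a is R*cos(pi/n)/cos(a' - pi/n),
// where a' is a reduced modulo the wedge angle 2*pi/n.
bool InsidePolygonAperture(float x, float y, int nBlades, float R) {
    const float kPi = 3.14159265358979f;
    float r = std::sqrt(x * x + y * y);
    if (r == 0.0f) return true;                    // center is always inside
    float a = std::atan2(y, x);
    if (a < 0.0f) a += 2.0f * kPi;
    float wedge = 2.0f * kPi / nBlades;
    float ap = std::fmod(a, wedge);
    float boundary = R * std::cos(kPi / nBlades) / std::cos(ap - kPi / nBlades);
    return r <= boundary;
}
```

Rejection sampling keeps the sampling density uniform over the polygon, so no change to your ray weights is needed beyond accounting for the smaller aperture area.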
What You Need To Submit
In addition to the code, you'll need to submit renderings of each of the four scenes (hw3_fisheye.pbrt and the other three) at both 4 and 512 samples per pixel. You should also re-render and submit hw3_telephoto.pbrt with the aperture radius decreased by half. For the auto-focus part of the assignment, please submit renderings of hw3_aftelephoto.pbrt at the film depth computed by your auto-focus routine with at least 256 samples per pixel, and include in your writeup the film plane depths your algorithm computed for each of the three scenes.
Your writeup should thoroughly describe your camera implementation and auto-focus implementation. It should also answer the following questions:
- What radiometric quantity is pbrt computing in your camera simulation? In other words: the color of each pixel in the output image is proportional to what quantity?
- Write an integral expression to compute the quantity described in question 1 at a point X on the film plane. Please precisely define all variables.
- Describe the domain of integration from question 2.
- Give the formula for F_n, a Monte Carlo estimator for the value of your integral. F_n is an estimate of the value of the integral using n samples drawn from the domain described in question 3.
- How did you draw samples from the domain of integration? Are you certain that you sampled from the space of all paths that light may have traveled through the lens? Describe the probability distribution used to generate random samples.
- Describe how pbrt computes F_n using your GenerateRay() implementation.
- When hw3_telephoto.pbrt is rendered with its aperture decreased by half, what are the two main effects you expect to see? Does your camera simulation produce this result? By decreasing the aperture radius by one half, by how many stops have you decreased the resulting photograph's exposure?
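As a reminder when answering the Monte Carlo questions, the general form of an importance-sampled estimator (with your own integrand, domain, and sampling density substituted in -- this is the generic template, not the specific answer for the lens integral) is:

```latex
F_n = \frac{1}{n} \sum_{i=1}^{n} \frac{f(X_i)}{p(X_i)},
\qquad X_i \sim p,
\qquad \mathbb{E}[F_n] = \int f(x)\,dx .
```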
To submit your work, create a .tar.gz file that contains:
- Your writeup with answers to questions listed in the "What You Need To Submit" section above.
- Your camera implementation code (realistic.cpp and all other files you added or modified).
- All the images listed in the "What You Need to Submit" section, clearly labeled.
- Also, feel free to submit any other cool images you generate.
- A description of the extra-credit you attempted, if any.
Use the assignment submission page to upload and submit your work. You can submit multiple times if you wish to make changes after your initial submission. Code can be submitted using the script on corn (see Piazza).
This assignment will be graded on a 4 point scale:
- 1 point: Significant flaws in camera simulation
- 2 points: Camera simulation works (or contains minor flaws), no implementation of auto-focus
- 3 points: Code correct but writeup does not address all points listed above
- 4 points: Code correct and a clear, complete writeup
- Extra Credit: We are happy to give extra credit for either a really neat auto-focus improvement or a working bokeh implementation.