Final Report, Apollo: Augmenting Light Functions to Simulate Wave-Based Diffraction

Final Project in CS 184: Foundations of Computer Graphics

Team Number 54

Final Report: https://satish.dev/projects/184_final/

Final Video: https://youtu.be/q__vekWq_io

Final Presentation: Canva

Github Repo: Github


Milestone: https://satish.dev/projects/184_milestone/

Slides: https://docs.google.com/presentation/d/1Xtp83b7pyEKYRISCwhU9ZiQJkb9-J2A-jG7GtQ50qKQ/edit?usp=sharing

Video Link: https://youtu.be/FEWRREKrkVo


Proposal Link: https://satish.dev/projects/184_proposal/

Abstract

In this project, we explore the simulation of light diffraction using a pathtracer. We implement a method to simulate Young’s double-slit experiment as well as shaped-aperture diffraction patterns. Our approach is based on the papers “Rendering Wave Effects with Augmented Light Field” and “Augmenting Light Field to model Wave Optics effects” by Se Baek Oh et al., which propose augmenting the classic light field with “virtual light projectors” that can emit variable radiance values, including negative radiance. We extend the pathtracer to support these virtual light projectors, allowing us to simulate diffraction patterns and interference effects.

We also modified the pathtracer to perform a full wave simulation of rays, calculating the phase of each ray from its wavelength and switching from a scalar radiance summation to a complex amplitude summation. This enables us to accurately simulate the interference patterns produced by multiple light sources without the use of virtual light projectors.

Our results show that the pathtracer is capable of accurately simulating these phenomena, providing a powerful tool for exploring the wave nature of light.

“[The double-slit experiment] has in it the heart of quantum mechanics. In reality, it contains the only mystery.” —Richard Feynman

Initial Analysis

We began with an analysis of the pathtracer from the end of Homework 3, testing out some of the features that we would need to implement and seeing how they rendered prior to implementation. This exposed some unique intricacies of the pathtracer that we did not initially expect!

To set up the basis for our experimentation, we constructed a double-slit experiment in Blender using 3 planes (2 long, skinny ZY planes mirroring a short ZY plane across the Y-axis) to test the interference patterns of 3 differently valued lights. The lights were arranged in a horizontal line and, from left to right, were assigned strengths of 1, -0.3, and 0.1.


The closer, darker green rectangle is our slit plane, and the colored background is a plane farther along the X-axis that captures the interference patterns. The variety of colors reflected on the wall was very interesting and not initially expected. While negative radiance is not a property of normal physics, the pathtracer handled negative lights in an understandable way, subtracting their contribution instead of adding it. Knowing that nothing deep in the codebase prevented negative radiances, we were confident we were on a good path to implementing our diffraction simulation.

Slit Colors
Initial Slit Coloring
Slit Blender
Blender Slit Setup
Negative 0.3 1
View 1
Negative 0.3 2
View 2
Negative 0.3 3
View 3

Technical Approach

Introduction

Upon reading both papers, we realized that the simulation of diffraction was not as simple as we had initially thought. Our normal pathtracer treats light as incoherent, “a superposition of infinite numbers of plane waves with random phase delays”. This means that light reaches a surface with completely random phase delays, so the light rays do not interfere with each other. This is a good approximation for most cases, but it does not capture the wave nature of light.
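To build intuition for why incoherent light shows no fringes, here is a small illustrative sketch (our own demonstration, not project code): averaging the two-wave interference cross term over uniformly random phase delays drives it to zero, leaving only the incoherent sum of intensities.

```cpp
#include <cmath>
#include <random>

// Average the interference cross term 2*cos(phase) of two unit-amplitude
// waves over uniformly random phase delays. As the sample count grows, the
// mean tends to 0, which is why an incoherent path tracer shows no fringes.
double mean_interference_term(int n_samples, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> phase(0.0, 2.0 * M_PI);
    double sum = 0.0;
    for (int i = 0; i < n_samples; ++i)
        sum += 2.0 * std::cos(phase(rng));  // cross term of I = I1 + I2 + 2*cos(dphi)
    return sum / n_samples;                 // approaches 0 for large n
}
```

With coherent light the phase delays are fixed rather than random, so this cross term survives and produces the fringes discussed below.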

In order to simulate interference patterns, we need to understand the coherence of light: the property of light rays that enables them to interfere with each other. The most intuitive way to represent this in our pathtracer was to use point sources, which can be treated as a coherent light emitter, where each wavefront from the source shares the same phase.

Our implementation in the pathtracer consists of two complementary modes of simulating diffraction and interference: a wave-based integrator and an augmented light field (ALF) with negative lights. A flag in the program selects between the two branches. Both strategies rely on the same wave intensity equation, but they differ in how they approximate the physical complexities of light passing through apertures and propagating over distances.

In the classic light field, we project the u-axis down onto the X-axis to compute the intensity at each location. In the WDF, by contrast, there is a third “virtual projector” that carries a changing phase value; when projected down onto the X-axis at the screen, it does not yield a constant intensity but a periodic function that oscillates between 0 and a maximum value. The “virtual projector” is a point source that emits light with radiance values mimicking this periodic function, while our wave-based integrator calculates the phase information directly to compute the intensity at each point in space.
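As a concrete sketch of that periodic projection (our own illustrative code with hypothetical names, not the project’s), the two-slit intensity at a screen coordinate x follows I = I1 + I2 + 2√(I1·I2)·cos(kδ); with equal unit intensities it oscillates between 0 and 4:

```cpp
#include <cmath>

// Two-source interference at a screen point in the small-angle regime.
// slit_sep is the slit separation, screen_dist the slit-to-screen distance,
// all in the same (arbitrary) units as wavelength and x.
double slit_intensity(double x, double slit_sep, double screen_dist,
                      double wavelength) {
    double k = 2.0 * M_PI / wavelength;        // wave number
    double delta = slit_sep * x / screen_dist; // path-length difference
    // I = I1 + I2 + 2*sqrt(I1*I2)*cos(k*delta); with I1 = I2 = 1:
    return 2.0 + 2.0 * std::cos(k * delta);
}
```

At x = 0 the paths are equal and the intensity peaks; the first dark fringe appears where the path difference reaches half a wavelength.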

WDFvsLF
WDF vs LF
WDF_Wavefront
WDF of Wavefront

Implementation

The implementation of the wave-based integrator and the augmented light field with negative lights was done in the pathtracer.cpp file. The most impactful changes were made in the PathTracer::estimate_direct_lighting_importance function, where we compute the radiance of each ray-scene intersection.

When waveMode == true, each ray carries a complex amplitude instead of a scalar radiance. After intersecting the scene, we initialize two accumulators to store the real and imaginary components of the electric field at the camera pixel. The radiance is converted into a field amplitude A = L_i * amplitude_scaling, where amplitude_scaling is user-defined to reveal the diffraction pattern. The wave number k = 2π/λ is computed by reading the light’s wavelength in nanometers with light->nanometer(), scaling it into scene units with nanometer_scaling, and dividing 2π by the result. The accumulated optical phase over a propagation distance distToLight is then φ = k * distToLight.

We split the field into real and imaginary contributions using Euler’s formula, initializing E_re = A * cos(φ) * invR and E_im = A * sin(φ) * invR. Although the field decays as 1/r, we hold invR at 1 for the image demonstrations. To integrate over all paths, we weight each field sample by the BSDF, the geometry term, and the sampling PDF. After summing accum_re and accum_im across all lights and samples, the final output is reconstructed as the intensity I = |E|² = E_re² + E_im² per color channel. This direct phase accumulation yields high-fidelity fringes, renders quickly, and respects both the Fresnel and Fraunhofer regimes.
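The accumulation described above can be condensed into a simplified stand-alone sketch. The names here (FieldAccum, add_sample) are ours, not the project’s, and weight stands in for the BSDF · cosθ / pdf factor of a real sample:

```cpp
#include <cmath>

// Minimal sketch of wave-mode accumulation: sum complex field samples,
// then reconstruct intensity as |E|^2.
struct FieldAccum {
    double re = 0.0, im = 0.0;

    // Add one light sample: amplitude A, propagation distance r,
    // wave number k = 2*pi/lambda, and a Monte Carlo weight.
    void add_sample(double A, double r, double k, double weight,
                    double invR = 1.0) {
        double phi = k * r;                      // accumulated optical phase
        re += weight * A * std::cos(phi) * invR; // Euler: A*e^{i*phi}
        im += weight * A * std::sin(phi) * invR;
    }

    // Final radiance output: I = |E|^2 = re^2 + im^2 (per color channel).
    double intensity() const { return re * re + im * im; }
};
```

Two equal samples whose path lengths differ by half a wavelength cancel exactly, while a full-wavelength difference doubles the field and quadruples the intensity, reproducing the fringe extremes.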

When waveMode == false, the code falls back to an ALF-style convolution that uses “negative” lights to create the interference patterns. First, during scene creation, we create negative lights between every pair of positive lights by injecting additional SceneObject::PointLights into the list of lights used to render the scene. Together, these virtual projectors approximate the WDF. In the double-pinhole case this approximation is accurate, but mimicking apertures requires a high density of projectors. In this branch, if light->is_negative() returns true, the code treats that sample as a virtual projector. The virtual projector’s radiance is calculated by extending the paper’s 1D term 2cos(2π(a−b)θ/λ) to 3 dimensions. It first computes δ = dot(light->diff(), w_in), where light->diff() encodes the vector between the two real lights the virtual projector sits between. Multiplying by the wave number k yields the phase difference between rays passing through each slit (phase = k * δ). The amplitude modulation term mod = 2·cos(phase) applies a variable modulation to the radiance. We then form a signed radiance vector Lm = L_i * mod * amplitude_scaling and subtract it (-Lm) in the outgoing sum, effectively carving out the occluded geometry while leaving the diffracted wave intact.
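The virtual-projector modulation above can be sketched as a small stand-alone function (names illustrative, not the project’s exact API): diff is the vector between the two real lights, w_in the incoming ray direction, and k the wave number.

```cpp
#include <cmath>

// 3D analogue of the paper's 1D term 2*cos(2*pi*(a-b)*theta/lambda):
// project the slit-separation vector onto the ray direction to get the
// path difference, then modulate by 2*cos(k * delta).
double projector_modulation(const double diff[3], const double w_in[3],
                            double k) {
    // delta = dot(diff, w_in): path difference between the two slits
    double delta = diff[0]*w_in[0] + diff[1]*w_in[1] + diff[2]*w_in[2];
    double phase = k * delta;   // phase difference between the two paths
    return 2.0 * std::cos(phase);
}
```

Directions perpendicular to the separation vector see zero path difference (modulation +2, fully constructive), while a quarter-wavelength projected difference yields zero modulation.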

Problems Encountered

The scenes we chose to model (the double slit and various shaped apertures) were chosen both as rendering goals and because they were feasible with both our wave-based accumulator and our virtual projectors. We ran into difficulty understanding the complex nature of light and the challenges the paper was trying to overcome. In our initial attempt, we directly implemented the wave accumulator and defined our virtual projectors in Blender, which worked for simple scenes. Upon reflection at 3AM on Sunday, we noticed a difference between our double-slit pattern and the known solution, and when we removed the virtual projector, the correct pattern emerged. This indicated that we had misunderstood the significance of the virtual projector; after further examination, we reworked more sections of the codebase to implement the virtual projector correctly while retaining our wave accumulator so we could compare the two patterns. We found that they produced extremely similar results, differing based on our nanometer-scaling and amplitude-scaling parameters (which increased contrast in the ALF and increased overall light in the wave-based mode). Unfortunately, the ALF offered no speed improvement for us, as the number of virtual projectors grew quadratically with the higher densities required for aperture diffraction. However, the overall quality of the ALF diffraction pattern was significantly better and much more vibrant, thanks to the amplitude scaling affecting the contrast.

Injecting our virtual projectors also posed many challenges, requiring us to step through the DAE parsing and SceneObject creation to insert new objects. Unfortunately, certain classes were very similarly named, which initially made the flow of information during scene creation difficult to follow. After a few hours, we understood the entire process and added variables to track the parameters needed for the Augmented Light Field’s virtual projectors and the wavelength of each light.

The difficulty of the project prevented us from going further with the ALF implementation; however, we did attempt to implement the wave-based BSDF sampling described in [3]. This was a very interesting idea that builds atop the ALF, but we were unable to implement it in time.

Learning Outcomes

Overall, these code additions faithfully implement light diffraction within the standard path-tracing framework. We learned a great deal about the nature of light, the complexities of simulating it, and the numerous ways to approach the problem. Extending the pathtracer’s scene creation also taught us the importance of understanding the flow of information, of keeping track of variables, classes, and function calls, and of clear naming conventions. Our issues with the virtual projectors were a great learning opportunity to consistently check our assumptions and question whether our understanding was correct. Oh et al.’s papers continued to reveal more information with every read, and because we insisted on challenging our own understanding, we were able to catch our mistakes and correct them.

Results

The results of our implementation are shown below. We include details regarding the wavelength and amplitude scaling, as well as the time taken to render each image. The first row shows the results of the double-slit experiment, where we can see the interference pattern created by the two slits. The following rows show the results of various shaped apertures, including a square, circle, and octagon. Notably, the circular aperture produces a very accurate Airy disk, the well-known diffraction pattern produced by circular apertures. The square aperture also matches its expected diffraction pattern.

Slit Green Screenshot Wave NM 5 AM 1
Slit - Wave, NM 5, AM 1, 0.13s
Slit Green Screenshot ALF NM 5 AM 1
Slit - ALF, NM 5, AM 1, 0.14s
Slit All Screenshot Wave NM 5 AM 4
Slit - Wave, NM 5, AM 4, 0.57s
Slit All Screenshot ALF NM 5 AM 1
Slit - ALF, NM 5, AM 1, 0.6s
Square All Screenshot ALF NM 4.5 AM 75
Square - ALF, NM 4.5, AM 75, 1894s
Square All Screenshot Wave NM 4.5 AM 40
Square - Wave, NM 4.5, AM 40, 26s
Square Red Far Screenshot ALF NM 5 AM 75
Square - ALF, NM 5, AM 75, 470s
Square Red Far Screenshot Wave NM 5 AM 79
Square - Wave, NM 5, AM 79, 7s
Square Green Screenshot ALF NM 5 AM 50
Square - ALF, NM 5, AM 50, 398s
Square Green Screenshot Wave NM 5 AM 50
Square - Wave, NM 5, AM 50, 19s
Circle Green Screenshot ALF NM 4.5 AM 25
Circle - ALF, NM 4.5, AM 25, 3375s
Circle Green Screenshot Wave NM 4.5 AM 25
Circle - Wave, NM 4.5, AM 25, 53s
Circle All Wave NM 4.5 AM 20
Circle - Wave, NM 4.5, AM 20, 11s
Circle All ALF NM 4.5 AM 50
Circle - ALF, NM 4.5, AM 50, 1073s
Octagon Green Wave NM 4.5 AM 15
Octagon - Wave, NM 4.5, AM 15, 11s
Octagon Green ALF NM 4.5 AM 100
Octagon - ALF, NM 4.5, AM 100, 812s
Octagon All Wave NM 4.5 AM 10
Octagon - Wave, NM 4.5, AM 10, 41s
Octagon All ALF NM 4 AM 125
Octagon - ALF, NM 4, AM 125, 3773s
Triangle Green Screenshot Wave NM 5 AM 10
Triangle - Wave, NM 5, AM 10, 854s
Triangle Green Screenshot ALF NM 5 AM 10
Triangle - ALF, NM 5, AM 10, 854s
Horizontal Green Screenshot Wave NM 5 AM 5
Horizontal - Wave, NM 5, AM 5, 75s
Horizontal Green Screenshot ALF NM 5 AM 5
Horizontal - ALF, NM 5, AM 5, 11s

Here is a gif of the rendering process.

Render
Rendering Process

Goals

Our reference diffraction patterns were the following:

Farzone
Square Aperture
Airy Disk
Circular Aperture
White Diffraction
White Diffraction
Wave
All color square diffraction

References

  1. Oh, Se Baek, George Barbastathis, and Ramesh Raskar. Augmenting Light Field to model Wave Optics effects. Preprint, Massachusetts Institute of Technology, July 2009. https://arxiv.org/abs/0907.1545

  2. Oh, Se Baek, Sriram Kashyap, Rohit Garg, Sharat Chandran, and Ramesh Raskar. Rendering Wave Effects with Augmented Light Field. Computer Graphics Forum, vol. 29, no. 2, pp. 507–516, June 2010. DOI: 10.1111/j.1467-8659.2009.01620.x. https://dspace.mit.edu/handle/1721.1/66536

  3. Cuypers, Tom, Se Baek Oh, Tom Haber, Philippe Bekaert, and Ramesh Raskar. Ray-Based Reflectance Model for Diffraction. Preprint, 2011. http://arxiv.org/abs/1101.5490

Contributions

  • Naman Satish - Developed the Wave and ALF simulation and its adaptation from 1D to 3D, the injection of negative projector creation, and initial Blender scene creation and scripting.

  • Nicholas Tran - Blender scene generation scripting, scene creation, Wave BSDF attempt, presentation slides.

  • Kevin Tseng - Wave BSDF replication attempt, mathematical research, video script.

  • Cashus Puhvel - Pathtracer infrastructure exploration, extensive wave BSDF attempt which ended up being incompatible with existing pathtracer code, helped with writeup, video editor.