We develop rendering algorithms that simulate light to create realistic images of virtual worlds along with inverse rendering algorithms that go the opposite way and reconstruct 3D worlds from images. We disseminate our work through open source projects like the Mitsuba Renderer.
If we could backpropagate derivatives through a rendering algorithm, then it should be possible to employ some variant of gradient descent to run a rendering algorithm “in reverse” and reconstruct the world from images. This turns out to be surprisingly hard: rendering algorithms are very large programs, which makes naïve backpropagation slow and memory-intensive. We develop algorithms that exploit physical laws to compute these derivatives more efficiently.
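The idea can be illustrated with a deliberately tiny sketch (not the lab's actual method): a hypothetical one-parameter "renderer" maps a surface albedo to pixel values, and gradient descent on the image loss recovers that parameter from a target image. The `render` and `loss_and_grad` names and the analytic gradient are assumptions for illustration; a real system would obtain derivatives by automatic differentiation through a full renderer.

```python
# Toy inverse-rendering sketch: recover a scene parameter (albedo)
# from a target image by gradient descent on the image loss.

def render(albedo, lighting):
    # Trivial forward model: pixel = albedo * incident light.
    return [albedo * l for l in lighting]

def loss_and_grad(albedo, lighting, target):
    image = render(albedo, lighting)
    residuals = [i - t for i, t in zip(image, target)]
    loss = sum(r * r for r in residuals)
    # Analytic derivative of the squared-error loss w.r.t. albedo;
    # a real system would backpropagate this automatically.
    grad = sum(2.0 * r * l for r, l in zip(residuals, lighting))
    return loss, grad

lighting = [0.2, 0.5, 1.0, 0.8]
target = render(0.7, lighting)      # image of the "true" scene

albedo = 0.1                        # initial guess
for _ in range(200):
    loss, grad = loss_and_grad(albedo, lighting, target)
    albedo -= 0.3 * grad            # gradient-descent step

print(round(albedo, 3))             # converges to the true albedo, 0.7
```

The hard part hinted at in the paragraph above is that a real renderer involves millions of such parameters and a vastly more complicated forward model, so naïvely storing every intermediate value for backpropagation becomes prohibitive.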
We build mathematical models and algorithms that capture the visual richness of the world. This involves analyzing samples in RGL's state-of-the-art measurement laboratory and simulating surface microstructure along with the spectrum and polarization of light.
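As a minimal illustration of what "simulating the polarization of light" involves (a standard textbook formulation, not RGL's measurement code): polarized light can be tracked with a Stokes vector [I, Q, U, V], and optical elements act on it via 4×4 Mueller matrices.

```python
# Apply a Mueller matrix (4x4) to a Stokes vector (length 4).
def mueller_apply(M, s):
    return [sum(M[i][j] * s[j] for j in range(4)) for i in range(4)]

# Mueller matrix of an ideal horizontal linear polarizer.
POLARIZER_H = [
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

unpolarized = [1.0, 0.0, 0.0, 0.0]   # unit-intensity unpolarized light
out = mueller_apply(POLARIZER_H, unpolarized)
print(out)  # [0.5, 0.5, 0.0, 0.0]: half the intensity, fully H-polarized
```

A polarization-aware renderer propagates such vectors along light paths, multiplying in a Mueller matrix at every surface interaction instead of a plain scalar reflectance.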
We develop compilers that transform descriptions of rendering and differentiable rendering tasks into efficient computational kernels for CPUs and for GPUs with hardware-accelerated ray tracing. Obtaining high performance requires kernel fusion, automatic differentiation, and specialized optimization passes.
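Kernel fusion, one of the passes mentioned above, can be sketched in a few lines (a hypothetical toy, not the lab's compiler): two elementwise operations are combined into a single loop so the intermediate array is never materialized. The `make_fused` helper is an invented name for illustration.

```python
# Unfused: two "kernels", with a temporary array between them.
def unfused(xs):
    tmp = [x * 2.0 for x in xs]      # kernel 1 writes a temporary
    return [t + 1.0 for t in tmp]    # kernel 2 reads it back

# Fused: "compile" a pipeline of scalar ops into one loop body.
def make_fused(*ops):
    def kernel(xs):
        out = []
        for x in xs:
            for op in ops:
                x = op(x)            # apply the whole pipeline
            out.append(x)            # one pass, no temporaries
        return out
    return kernel

fused = make_fused(lambda x: x * 2.0, lambda x: x + 1.0)
print(fused([1.0, 2.0, 3.0]))        # [3.0, 5.0, 7.0]
print(unfused([1.0, 2.0, 3.0]))      # same result, extra memory traffic
```

On a GPU the payoff is substantial: the fused version launches one kernel and touches memory once, whereas the unfused version pays for an extra kernel launch and a round trip through memory for the temporary.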
Delio Vicini successfully defended his Ph.D. thesis. Congrats, Dr. Vicini!
Merlin Nimier-David successfully defended his Ph.D. thesis. Congratulations, Dr. Nimier-David!
Kate Salesin is visiting RGL for a month. Kate is a NASA fellow doing research on differentiable polarimetric rendering for remote sensing applications. Welcome, Kate!