We develop rendering algorithms that simulate light to create realistic images of virtual worlds along with inverse rendering algorithms that go the opposite way and reconstruct 3D worlds from images. We disseminate our work through open source projects like the Mitsuba Renderer.
Topics
If we could backpropagate derivatives through a rendering algorithm, it should be possible to run it “in reverse” using a variant of gradient descent and reconstruct the world from images. This turns out to be surprisingly hard: renderers are very large programs, which makes naïve backpropagation slow and costly. We develop differentiation algorithms that exploit physical laws to perform this computation far more efficiently.
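As a rough sketch of the basic idea (not of the lab's specific algorithms), the snippet below uses the Python interface of Mitsuba 3 to recover a single material parameter by gradient descent on an image-space loss; the scene file and parameter key are hypothetical placeholders chosen for illustration.

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # differentiable GPU variant

# Hypothetical scene and parameter name, for illustration only
scene  = mi.load_file('scene.xml')
params = mi.traverse(scene)
key    = 'wall.bsdf.reflectance.value'

ref = mi.render(scene, spp=128)          # "photograph" of the true scene

opt = mi.ad.Adam(lr=0.05)                # optimizer tracking the parameter
opt[key] = mi.Color3f(0.3, 0.3, 0.3)     # start from a wrong guess
params.update(opt)

for it in range(100):
    img  = mi.render(scene, params, spp=8)   # differentiable rendering
    diff = img - ref
    loss = dr.mean(diff * diff)              # image-space L2 loss
    dr.backward(loss)                        # backpropagate through the renderer
    opt.step()                               # gradient step on the parameter
    params.update(opt)                       # push the new value into the scene
```

Even this toy setup hints at the difficulty mentioned above: each gradient step differentiates an entire rendering computation, which is what makes naïve approaches expensive at scale.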
We build mathematical models and algorithms that capture the visual richness of the world. This involves analyzing material samples in RGL's state-of-the-art measurement laboratory and simulating surface microstructure as well as the spectrum and polarization of light.
We develop compilers that transform descriptions of rendering and differentiable rendering tasks into efficient computational kernels for CPUs and for GPUs with hardware-accelerated ray tracing. Obtaining high performance requires kernel fusion, automatic differentiation, and specialized optimization passes.
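To give a small, generic illustration of this style of compilation (using Dr.Jit, one of the lab's open-source projects; the snippet is only a usage sketch, not a description of its internals), arithmetic on array types is traced rather than executed immediately, then fused into a single kernel at evaluation time, and derivatives can be requested on the same traced program:

```python
import drjit as dr
from drjit.llvm.ad import Float   # CPU backend; drjit.cuda.ad.Float targets GPUs

x = dr.linspace(Float, 0.0, 1.0, 1_000_000)
dr.enable_grad(x)

# These operations are only traced; no computation happens yet.
y = dr.sin(x) * dr.exp(-x)

# Evaluation compiles the whole trace into one fused kernel and runs it.
dr.eval(y)

# Reverse-mode differentiation generates and runs a second fused kernel.
dr.backward(dr.sum(y))
print(dr.grad(x))
```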
News
Rami Tabbara joins RGL as a research engineer. He will participate in the development of the lab's next-generation differentiable rendering software infrastructure. Welcome, Rami!
Delio Vicini successfully defended his Ph.D. thesis. Congrats, Dr. Vicini!