
Unified Neural Encoding of BTFs

In Computer Graphics Forum (Proceedings of Eurographics 2020)

Left: We train a network to encode/decode BTF texels using 77 materials from the Bonn BTF database [WGK14]. Center: Each texel's appearance is projected into a shared, 32-dimensional encoding space. Right: Renderings of an unseen material from each of the 7 classes in the database (bottom row of the database), rendered directly from the latent projection.

Abstract

Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex data such as BTF texels can prove challenging, as models tend to describe restricted function spaces that cannot encompass real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material. Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials, and which projects reflectance measurements to a shared latent parameter space. Similarly to SVBRDF fitting, real-world materials are represented by parameter maps, and the decoder network is analogous to the analytic BRDF expression (also parametrized on light and view directions for practical rendering applications). With this approach, encoding and decoding materials becomes a simple matter of evaluating the network. We train and validate on BTF datasets of the University of Bonn, but there are no prerequisites on either the number of angular reflectance samples or the sample positions. Additionally, we show that the latent space is well-behaved and can be sampled from, for applications such as mipmapping and texture synthesis.
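The decoder's role as a drop-in replacement for an analytic BRDF can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the 32-dimensional latent size matches the paper, but the hidden width, weight initialization, and function names (`decode`, `W1`, etc.) are assumptions for demonstration. Each texel stores a latent code; at render time the decoder MLP is evaluated on that code concatenated with the queried light and view directions.

```python
import numpy as np

# Hypothetical sketch of decoder evaluation: a texel is a 32-dimensional
# latent code z (per the paper), and the decoder is a small MLP that, like
# an analytic BRDF, also takes light/view directions and outputs RGB.

rng = np.random.default_rng(0)

LATENT_DIM = 32   # shared latent parameter space (from the paper)
HIDDEN = 64       # hidden width: an assumption, not from the paper

# Randomly initialized weights stand in for a trained decoder.
W1 = rng.standard_normal((LATENT_DIM + 6, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def decode(z, wi, wo):
    """Evaluate the decoder for latent code z and unit light/view directions."""
    x = np.concatenate([z, wi, wo])      # condition on both directions
    h = np.maximum(x @ W1 + b1, 0.0)     # ReLU hidden layer
    return h @ W2 + b2                   # linear RGB output

z = rng.standard_normal(LATENT_DIM)      # a texel's latent code
wi = np.array([0.0, 0.0, 1.0])           # light direction (normal incidence)
wo = np.array([0.0, 0.0, 1.0])           # view direction
rgb = decode(z, wi, wo)
print(rgb.shape)  # (3,)
```

Because rendering only requires evaluating this small network, the representation stays compact and continuously parametrized in the angular domain, unlike tabulated BTF data that must be interpolated.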

Text citation

Gilles Rainer, Abhijeet Ghosh, Wenzel Jakob, and Tim Weyrich. 2020. Unified Neural Encoding of BTFs. In Computer Graphics Forum (Proceedings of Eurographics) 39(2).

BibTeX
@article{Rainer2020Unified,
    author = {Gilles Rainer and Abhijeet Ghosh and Wenzel Jakob and Tim Weyrich},
    title = {Unified Neural Encoding of BTFs},
    journal = {Computer Graphics Forum (Proceedings of Eurographics)},
    volume = {39},
    number = {2},
    year = {2020},
    month = jun,
    doi = {10.1111/cgf.13921}
}