Realistic Graphics Lab

Neural BTF Compression and Interpolation

Conditionally accepted to Computer Graphics Forum (Proceedings of Eurographics 2019)

BTF renderings of a challenging specular fabric (shantung). At the same compression ratio, our neural BTF approximation is able to capture subtle surface variations and anisotropy that are lost by principal component analysis-based compression.


The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artifacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking...). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples.
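To make the per-texel query interface concrete, here is a minimal sketch of how such a decoder could be evaluated at render time: a small MLP concatenates a texel's latent code with a light and a view direction and returns one RGB value per query. The layer sizes, weights, and function names are illustrative assumptions, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # hypothetical size of the per-texel latent code
HIDDEN = 32      # hypothetical hidden-layer width

# Randomly initialised weights stand in for a trained decoder.
W1 = rng.normal(size=(LATENT_DIM + 6, HIDDEN)) * 0.1  # input: latent + light dir + view dir
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 3)) * 0.1               # output: RGB
b2 = np.zeros(3)

def decode(latent, light_dir, view_dir):
    """Continuously query reflectance at any (light, view) pair,
    with no linear interpolation between discrete angular samples."""
    x = np.concatenate([latent, light_dir, view_dir])
    h = np.maximum(W1.T @ x + b1, 0.0)  # ReLU hidden layer
    return W2.T @ h + b2                # one RGB vector per query

latent = rng.normal(size=LATENT_DIM)   # code the encoder would produce for one texel
rgb = decode(latent, np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, 0.95]))
print(rgb.shape)  # (3,)
```

Because the decoder is a continuous function of the direction inputs, a renderer can evaluate it at arbitrary incident and outgoing directions rather than snapping to the captured angular grid.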
We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artifacts.
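For context, the PCA baseline mentioned above works along these lines: flatten the BTF into a matrix (texels by angular samples), keep only the top singular components, and reconstruct via a low-rank product. The snippet below is a toy sketch on synthetic data (not the paper's code or dataset); the matrix sizes and rank are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a BTF matrix: 256 texels, 100 (light, view) samples,
# constructed to be approximately rank 5 plus a little noise.
texels, angles = 256, 100
A = rng.normal(size=(texels, 5)) @ rng.normal(size=(5, angles))
A += 0.01 * rng.normal(size=A.shape)

def pca_compress(M, k):
    """Keep the top-k principal components of the BTF matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]  # per-texel coefficients, shared angular basis

coeffs, basis = pca_compress(A, k=5)
recon = coeffs @ basis                         # rendering interpolates rows of this
err = np.linalg.norm(A - recon) / np.linalg.norm(A)
ratio = A.size / (coeffs.size + basis.size)    # crude compression ratio
print(f"relative error {err:.3f}, ratio {ratio:.1f}x")
```

Note that the reconstructed matrix still lives on the original discrete angular grid, so rendering novel directions requires interpolating between its columns; this is the step where the blurring and ghosting artifacts discussed above can arise, and which the continuous neural decoder avoids.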