
Visual Computing Seminar (Fall 2017)

Wednesday
Food @ 11:50am,
Talk @ 12:15pm

General information

The Visual Computing Seminar is a weekly seminar series on topics in visual computing.

Why: The motivation for creating this seminar is that EPFL has a critical mass of people who are working on subtly related topics in computational photography, computer graphics, geometry processing, human–computer interaction, computer vision and signal processing. Having a weekly point of interaction will provide exposure to interesting work in this area and increase awareness of our shared interests and other commonalities like the use of similar computational tools; think of this as the visual computing edition of the “Know thy neighbor” seminar series.

Who: The target audience is faculty, students and postdocs in the visual computing disciplines, but the seminar is open to anyone and guests are welcome. There is no need to formally enroll in a course. The format is very flexible and will include 45-minute talks with Q&A, talks by external visitors, as well as shorter presentations. In particular, the seminar is also intended as a way for students to obtain feedback on shorter ~20min talks preceding a presentation at a conference. If you are a student or postdoc in one of the visual computing disciplines, you’ll probably receive email from me soon about scheduling a presentation.

Where and when: every Wednesday in BC04 (next to the ground floor atrium). Food is served at 11:50, and the actual talk starts at 12:15.

How to be notified: If you want to be kept up to date with announcements, please send me an email and I’ll put you on the list. If you are working in LCAV, CVLAB, IVRL, LGG, LSP, IIG, CHILI, LDM or RGL, you are automatically subscribed to future announcements, so there is nothing you need to do.
You may add the seminar events to Google Calendar (click the '+' button in the bottom-right corner), or download the iCal file.

Schedule

Date Lecturer Contents
18.10.2017 Merlin Nimier-David

Title: Rendering Specular Microstructure Using Adaptive Gaussian Processes

Abstract: Many materials in our world are characterized by complex angularly dependent reflectance behavior that is challenging to reproduce in realistic light transport simulations. In particular, materials such as brushed metals, anodized aluminum and bumpy plastics are interesting since their surfaces produce glints, that is, strong reflections due to very high-frequency surface normal variations. Although measurements can be used to capture reflectance values for many combinations of lighting and observation angles, achieving sufficient precision requires large amounts of data, which is then difficult to use in a practical rendering scenario.

However, the characteristics of the surface and its fabrication process imply some statistical regularity in the observations, which we wish to characterize and exploit. We propose a new stochastic microfacet model able to capture the complex specular behavior of rough surfaces under sharp lighting. We model microsurface detail as a Gaussian process, which is efficiently sampled. Users may specify desired surface statistics through an autocorrelation function, extending the class of supported surfaces compared to previous work. Our model is fully procedural and thus does not rely on any provided microsurface. Evaluation is accelerated through pruning strategies derived from the predictive distribution of slopes. Finally, we provide a proof-of-concept implementation in a rendering system to validate our method's properties experimentally.
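
As a rough illustration of the core ingredient (not the talk's adaptive, pruned evaluation), the sketch below samples a 1D heightfield from a zero-mean Gaussian process whose covariance comes from a user-chosen autocorrelation function. Function names and parameters are hypothetical, and the full field is instantiated at once rather than lazily along rays:

```python
import numpy as np

def sample_gp_heightfield(xs, autocorrelation, jitter=1e-10, seed=0):
    """Draw one realization of a zero-mean Gaussian process at points xs.

    `autocorrelation(d)` maps a distance d to a covariance value, e.g. a
    Gaussian kernel exp(-d^2 / (2 l^2)) for a smooth microsurface.
    """
    # Build the covariance matrix from pairwise distances.
    d = np.abs(xs[:, None] - xs[None, :])
    K = autocorrelation(d) + jitter * np.eye(len(xs))  # jitter for stability
    # Sample via the Cholesky factor: heights = L @ z with z ~ N(0, I).
    L = np.linalg.cholesky(K)
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(len(xs))

# Example: microsurface detail with correlation length 0.05.
xs = np.linspace(0.0, 1.0, 256)
heights = sample_gp_heightfield(xs, lambda d: np.exp(-d**2 / (2 * 0.05**2)))
```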

25.10.2017 Tizian Zeltner

Title: Frequency-Space Methods for Modeling Layered Materials

Abstract: Today's rendering systems use a number of bidirectional scattering distribution functions (BSDFs) representing different idealized materials such as metals or dielectrics. These produce physically plausible results, but usually fail to reproduce the complex scattering behavior present in most real-world materials.

In this talk, we discuss a special subset of materials that can be described by stacked layers of existing scattering models. Correctly combining models is a challenging problem due to the difficult internal scattering that happens between the layers. We present a framework for generating layered BSDFs stored in a frequency-space representation that facilitates the adding process in a precomputation step and additionally allows for efficient evaluation and importance sampling in a rendering system. By using layers of commonly used anisotropic microfacet BSDFs as well as scattering and absorbing media as building blocks, we can substantially enlarge the space of interesting materials for rendering applications.
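
For intuition, combining two layers essentially follows the classical adding equation from the adding-doubling literature (quoted here as background, not from the talk itself). The inverse term expands into a Neumann series over inter-reflections between the layers, and in a frequency-space representation these operators become matrices, so the combination reduces to matrix products:

```latex
% Reflectance of a two-layer stack (layer 1 on top of layer 2):
%   R_1, R_1'                      reflection of layer 1 from above / below
%   T_1^{\downarrow}, T_1^{\uparrow}  transmission through layer 1 down / up
R_{12} \;=\; R_1 \;+\; T_1^{\uparrow}\, R_2 \left( I - R_1'\, R_2 \right)^{-1} T_1^{\downarrow}
```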

01.11.2017 Peng Song

Title: Computational Design of Wind-up Toys

Abstract: Wind-up toys are mechanical assemblies that perform intriguing motions driven by a simple spring motor. Due to the limited motor force and small body size, wind-up toys often employ higher pair joints, which have less frictional contact, and connector parts of nontrivial shapes to transfer motions. These unique characteristics make them hard to design and fabricate as compared to other automata. This talk presents a computational system to aid the design of wind-up toys, focusing on constructing a compact internal wind-up mechanism to realize user-requested part motions. We use our system to design wind-up toys of various forms, fabricate a number of them using 3D printing, and show the functionality of various results.

08.11.2017 Radhakrishna Achanta

Title: A brief history of superpixels

Abstract: Superpixels are roughly equally sized clusters of similar-looking pixels. Creating superpixels reduces the number of entities in an image from a few million pixels to a few thousand clusters of pixels. This in turn reduces the computational burden of subsequent algorithms, such as those that use graphs, because there are fewer edges to account for. This talk presents three superpixel segmentation algorithms. The first is Simple Linear Iterative Clustering (SLIC), the second is a variant of it called Simple Non-iterative Clustering (SNIC), and the third is an algorithm that relaxes the requirement of having roughly equal size.
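
As context, SLIC is available in common image-processing libraries; a minimal usage sketch with scikit-image follows (parameter values are illustrative, and SNIC is not part of scikit-image):

```python
import numpy as np
from skimage import data, segmentation

# Load a sample RGB image and compute ~500 compact superpixels.
image = data.astronaut()
labels = segmentation.slic(image, n_segments=500, compactness=10.0,
                           start_label=1)

print("number of superpixels:", len(np.unique(labels)))
```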

15.11.2017 Matthieu Simeoni

Title: Functional Stochastic Maximum Likelihood for Array Signal Processing

Abstract: Stochastic Maximum Likelihood (SML) is a popular direction of arrival (DOA) estimation technique in array signal processing. It is a parametric method that jointly estimates signal and instrument noise by maximum likelihood, achieving excellent statistical performance. Its drawbacks are the computational overhead as well as the overly restrictive point-source data model, which assumes fewer sources than sensors. In this work, we propose Functional Stochastic Maximum Likelihood (FSML). It uses a general functional data model, allowing an unrestricted number of arbitrarily-shaped sources to be recovered. To this end, we leverage functional analysis tools and express the data in terms of an infinite-dimensional sampling operator acting on a Gaussian random function. We show that FSML is computationally more efficient than traditional SML, resilient to noise, and results in much better accuracy than spectral-based methods.
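
As background, classical SML models each array snapshot as a steering matrix applied to Gaussian source amplitudes plus noise, and fits the DOAs, source covariance and noise power to the sample covariance. A standard form of the objective (from the array-processing literature, not from the talk) is:

```latex
% Snapshot model: x_t = A(theta) s_t + n_t, with sample covariance \hat{R}.
\hat{\boldsymbol{\theta}} \;=\; \arg\min_{\boldsymbol{\theta},\,\mathbf{P},\,\sigma^2}
\;\log\det \mathbf{R}(\boldsymbol{\theta})
\;+\; \operatorname{tr}\!\bigl( \mathbf{R}(\boldsymbol{\theta})^{-1} \hat{\mathbf{R}} \bigr),
\qquad
\mathbf{R}(\boldsymbol{\theta}) \;=\; \mathbf{A}(\boldsymbol{\theta})\,\mathbf{P}\,\mathbf{A}(\boldsymbol{\theta})^{H} + \sigma^2 \mathbf{I}.
```

The point-source model requires the steering matrix to have more rows (sensors) than columns (sources); this is exactly the restriction FSML lifts.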

22.11.2017 Anastasia Tkach

Title: Online Generative Model Personalization for Hand Tracking

Abstract: We present a new algorithm for real-time hand tracking on commodity depth sensing devices. Our method does not require a user-specific calibration session, but rather learns the geometry as the user performs live in front of the camera, thus enabling seamless virtual interaction at the consumer level. The key novelty in our approach is an online optimization algorithm that jointly estimates pose and shape in each frame, and determines the uncertainty in such estimates. This knowledge allows the algorithm to integrate per-frame estimates over time, and build a personalized geometric model of the captured user. Our approach can easily be integrated in state-of-the-art continuous generative motion tracking software. We provide a detailed evaluation that shows how our approach achieves accurate motion tracking for real-time applications, while significantly simplifying the workflow of accurate hand performance capture.
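
A heavily simplified sketch of the temporal integration idea, standing in for the paper's joint pose/shape optimization: treat each frame's shape estimate as a Gaussian with known covariance and accumulate it in information form, so uncertain frames contribute little (all names are hypothetical):

```python
import numpy as np

class OnlineShapeEstimate:
    """Fuse per-frame shape estimates weighted by their certainty."""

    def __init__(self, dim):
        self.information = np.zeros((dim, dim))  # accumulated inverse covariance
        self.info_vector = np.zeros(dim)

    def update(self, shape_estimate, covariance):
        """Integrate one frame's estimate; uncertain frames contribute little."""
        precision = np.linalg.inv(covariance)
        self.information += precision
        self.info_vector += precision @ shape_estimate

    def current_shape(self):
        # Posterior mean given all frames seen so far.
        return np.linalg.solve(self.information, self.info_vector)

# Usage: two frames, the second far less certain than the first.
est = OnlineShapeEstimate(dim=3)
est.update(np.array([1.0, 2.0, 3.0]), 0.1 * np.eye(3))
est.update(np.array([2.0, 1.0, 0.0]), 10.0 * np.eye(3))
print(est.current_shape())  # close to the confident first frame
```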

06.12.2017 Kaicheng Yu

Title: Statistically-Motivated Second-Order Pooling

Abstract: Second-order pooling, a.k.a. bilinear pooling, has proven effective for visual recognition. The recent progress in this area has focused on either designing normalization techniques for second-order models, or compressing the second-order representations. However, these two directions have typically been followed separately, and without any clear statistical motivation. Here, by contrast, we introduce a statistically-motivated framework that jointly tackles normalization and compression of second-order representations. To this end, we design a parametric vectorization layer, which maps a covariance matrix, known to follow a Wishart distribution, to a vector whose elements can be shown to follow a Chi-square distribution. We then propose to make use of a square-root normalization, which makes the distribution of the resulting representation converge to a Gaussian, thus complying with the standard machine learning assumption. As evidenced by our experiments, this lets us outperform the state-of-the-art second-order models on several benchmark recognition datasets.
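
As a baseline illustration (the talk's parametric vectorization layer is not reproduced here), plain second-order pooling with an element-wise square-root normalization can be sketched in NumPy as follows:

```python
import numpy as np

def second_order_pool(features, eps=1e-5):
    """features: (n, d) array of local descriptors from a CNN feature map.

    Returns a square-root-normalized vectorization of their covariance.
    """
    mu = features.mean(axis=0)
    centered = features - mu
    cov = centered.T @ centered / (features.shape[0] - 1)  # (d, d) covariance
    # Element-wise signed square root, a common normalization for
    # second-order representations.
    normalized = np.sign(cov) * np.sqrt(np.abs(cov) + eps)
    # Vectorize the upper triangle (the matrix is symmetric).
    iu = np.triu_indices(cov.shape[0])
    return normalized[iu]

# Example: 100 local descriptors of dimension 8 -> vector of length 36.
pooled = second_order_pool(np.random.default_rng(0).standard_normal((100, 8)))
print(pooled.shape)  # (36,)
```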

13.12.2017 Mina Konakovic

Title: Computational Design of Programmable Auxetic Materials

Abstract: We present a computational method for designing programmable auxetic materials, i.e., flat materials with spatially varying physical properties optimized to approximate a target 3D surface when actuated. A key property of our approach is that the 3D surface is encoded in the 2D pattern. Leveraging the theory of conformal geometry, we control the maximal local scaling of the material to best match the surface deformation required to reach the target shape. The target surface is then achieved by maximally stretching the material everywhere. The required deformation can be physically realized in a simple manner without the need for any external guide surface, mold or scaffolding to define the final shape. In particular, we demonstrate that we can deform the fabricated material from its rest configuration towards the target by simply applying gravity or using an inflation approach. Finally, we present results that highlight potential applications in a wide spectrum of domains, ranging from small-scale heart stents to large-scale air-supported domes.
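
Schematically (my paraphrase of the abstract, not a formula from the talk): if f maps the flat pattern to the target surface conformally with scale factor e^u, the design problem constrains the local scaling to the range the pattern can physically realize, where s_max is a hypothetical symbol for its maximal stretch:

```latex
% Conformal deformation with spatially varying scale factor e^{u}:
\mathrm{d}f^{\top}\,\mathrm{d}f \;=\; e^{2u}\,\mathbf{I},
\qquad
1 \;\le\; e^{u} \;\le\; s_{\max}.
```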

20.12.2017 Ksenia Konyushkova

Title: Learning Active Learning from Data

Abstract: In this work, we suggest a novel data-driven approach to active learning (AL). The key idea is to train a regressor that predicts the expected error reduction for a candidate sample in a particular learning state. By formulating the query selection procedure as a regression problem, we are not restricted to working with existing AL heuristics; instead, we learn strategies based on experience from previous AL outcomes. We show that a strategy can be learnt either from simple synthetic 2D datasets or from a subset of domain-specific data. Our method yields strategies that work well on real data from a wide range of domains.
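
A minimal sketch of the data-driven strategy with scikit-learn (the meta-features and toy data here are invented for illustration; the paper's learning-state features are richer):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Offline: learn to predict expected error reduction from (state, sample)
# features gathered in previous active-learning runs.
# X_meta: (m, k) features describing a learning state + candidate sample.
# y_meta: (m,) observed test-error reduction after labeling that sample.
rng = np.random.default_rng(0)
X_meta = rng.standard_normal((1000, 5))
y_meta = X_meta[:, 0] * 0.3 + rng.standard_normal(1000) * 0.05  # toy data

strategy = RandomForestRegressor(n_estimators=100, random_state=0)
strategy.fit(X_meta, y_meta)

# Online: at each AL iteration, score all unlabeled candidates and
# query the one with the largest predicted error reduction.
candidate_features = rng.standard_normal((200, 5))
query_index = int(np.argmax(strategy.predict(candidate_features)))
print("query candidate:", query_index)
```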