
Visual Computing Seminar (Spring 2018)

Wednesday
Food @ 11:50am,
Talk @ 12:15pm
Tizian Zeltner
Organizer

General information

The Visual Computing Seminar is a weekly seminar series on topics in Visual Computing.

Why: The motivation for creating this seminar is that EPFL has a critical mass of people who are working on subtly related topics in computational photography, computer graphics, geometry processing, human–computer interaction, computer vision and signal processing. Having a weekly point of interaction will provide exposure to interesting work in this area and increase awareness of our shared interests and other commonalities like the use of similar computational tools — think of this as the visual computing edition of the “Know thy neighbor” seminar series.

Who: The target audience is faculty, students and postdocs in the visual computing disciplines, but the seminar is open to anyone, and guests are welcome. There is no need to formally enroll in a course. The format is very flexible and will include 45-minute talks with Q&A, talks by external visitors, as well as shorter presentations. In particular, the seminar is also intended as a way for students to obtain feedback on shorter ~20-minute talks preceding a presentation at a conference. If you are a student or postdoc in one of the visual computing disciplines, you'll probably receive email from me soon about scheduling a presentation.

Where and when: every Wednesday in BC03 (next to the ground floor atrium). Food is served at 11:50, and the actual talk starts at 12:15.

How to be notified: If you want to be kept up to date with announcements, please send me an email and I'll put you on the list. If you are working in LCAV, CVLAB, IVRL, LGG, LSP, IIG, CHILI, LDM or RGL, you are automatically subscribed to future announcements, so there is nothing you need to do.
You may add the seminar events to Google Calendar (click the '+' button in the bottom-right corner), or download the iCal file.

Schedule

Date Lecturer Contents
21.02.2018 Tizian Zeltner
07.03.2018 Gershon Elber

Title: Science Art Synergy - the CAD/CAM way

Abstract: The artwork of M.C. Escher needs no introduction. We have all learned to appreciate the impossibilities that this master of illusion's artwork presents to the layman's eye. Nevertheless, it may come as a surprise to some that many of the so-called 'impossible' drawings of M.C. Escher can be realized as actual physical, tangible, three-dimensional objects. These objects resemble Escher's drawings when viewed from a certain viewing direction.

In this talk, I will discuss several artistically-related applications where computer aided geometric design can have an impact. We will start by presenting some intriguing three-dimensional so-called “impossible models”, following drawings and artwork made by Escher and others, which were designed using a variety of geometric modeling and computer graphics tools, and we will present a modeling package to synthesize many of these so-called “impossible models”. Then, we will examine other applications, including glass laser etching and the making of tangible mosaics.

P.S. This talk imposes no prerequisites beyond an open mind.

14.03.2018 Agata Mosinska

Title: Beyond the Pixel-Wise Loss for Topology-Aware Delineation

Abstract: Delineation of curvilinear structures is an important problem in Computer Vision with multiple practical applications. With the advent of Deep Learning, many current approaches to automatic delineation have focused on finding more powerful deep architectures, but have continued using the habitual pixel-wise losses such as binary cross-entropy. In this paper we claim that pixel-wise losses alone are unsuitable for this problem because of their inability to reflect the topological impact of mistakes in the final prediction. We propose a new loss term that is aware of the higher-order topological features of linear structures. We also introduce a refinement pipeline that iteratively applies the same model over the previous delineation to refine the predictions at each step, while keeping the number of parameters and the complexity of the model constant.

When combined with the standard pixel-wise loss, both our new loss term and our iterative refinement boost the quality of the predicted delineations, in some cases almost doubling the accuracy as compared to the same classifier trained with binary cross-entropy alone. We show that our approach outperforms state-of-the-art methods on a wide range of data, from microscopy to aerial images.
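
As a rough editorial sketch of the kind of loss described above (not the authors' exact formulation): a higher-order term can be obtained by comparing prediction and ground truth in the feature space of a pretrained network and adding it to the usual binary cross-entropy. The layer cut and weight mu below are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    # Frozen pretrained features used only as a fixed "topology-sensitive" lens.
    _features = vgg19(pretrained=True).features[:16].eval()
    for p in _features.parameters():
        p.requires_grad_(False)

    def topology_aware_loss(pred, target, mu=0.1):
        """pred, target: (B, 1, H, W) probability maps in [0, 1]."""
        bce = F.binary_cross_entropy(pred, target)
        # The feature extractor expects 3 channels; replicate the single one.
        f_pred = _features(pred.repeat(1, 3, 1, 1))
        f_gt = _features(target.repeat(1, 3, 1, 1))
        topo = F.mse_loss(f_pred, f_gt)    # penalizes structural mistakes
        return bce + mu * topo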

21.03.2018 Adam Scholefield

Title: New tools for localisation and mapping

Abstract: User positioning, self-driving cars and other autonomous robots are all examples of the ubiquity of localisation and mapping. It is well studied in the forms of angulation and lateration—for known landmark positions—and, more generally, simultaneous localisation and mapping (SLAM) and structure from motion (SfM)—for unknown landmark positions. In this talk, I will review existing approaches and discuss some novel priors for these problems. In particular, I will place an emphasis on geometrical techniques using Euclidean distance matrices (EDMs) and discuss extensions to better fit practical applications. In addition, I will attempt to abstract the key essence of localisation and mapping problems, which will allow us to derive fundamental performance limits.
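
One reason EDMs are attractive for these problems: the squared-distance matrix of points in d-dimensional space has rank at most d + 2, regardless of the number of points, which localisation algorithms can exploit to denoise or complete measured distances. A minimal check (illustrative, not from the talk):

    import numpy as np

    # 10 random points in d = 2 dimensions; their EDM has rank d + 2 = 4.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10, 2))
    G = X @ X.T                              # Gram matrix
    g = np.diag(G)
    D = g[:, None] + g[None, :] - 2 * G      # D[i, j] = ||x_i - x_j||^2
    print(np.linalg.matrix_rank(D))          # prints 4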

28.03.2018 Gerard Pons-Moll

Title: Reconstructing and Perceiving Humans in Motion

Abstract: For man-machine interaction it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine and psychology, virtual and augmented reality, and special effects in movies.

Currently, digital models typically lack realistic soft tissue and clothing, or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly by capturing real people using 4D scans, images, and depth and inertial sensors. Combining statistical machine learning techniques and geometric optimization, we create realistic models from the captured data.

We then leverage the learned digital models to extract information out of incomplete and noisy sensor data coming from monocular video, depth or a small number of IMUs.

I will give an overview of a selection of projects where the goal is to build realistic models of human pose, shape, soft tissue and clothing. I will also present some of our recent work on 3D reconstruction of people models from monocular video, and real-time joint reconstruction of surface geometry and human body shape from depth data. I will conclude the talk by outlining the next challenges in building digital humans and perceiving them from sensory data.

Bio: Gerard Pons-Moll obtained his degree in Telecommunications Engineering from the Technical University of Catalonia (UPC) in 2008. From 2007 to 2008 he was at Northeastern University in Boston, USA, with a fellowship from the Vodafone foundation, conducting research on medical image analysis. He received his Ph.D. degree (with distinction) from the Leibniz University of Hannover in 2014. In 2012 he was a visiting researcher at the vision group at the University of Toronto. In 2012 he also worked as an intern at the computer vision group at Microsoft Research Cambridge. From 11/2013 until 09/2015 he was a postdoc, and from 10/2015 to 08/2017 a Research Scientist, at Perceiving Systems, Max Planck Institute for Intelligent Systems. Since 09/2017 he has been heading the Real Virtual Humans group at the Max Planck Institute for Informatics.

His work has been published at the major computer vision and computer graphics conferences and journals, including Siggraph, Siggraph Asia, CVPR, ICCV, BMVC (Best Paper), Eurographics (Best Paper), IJCV and TPAMI. He serves regularly as a reviewer for TPAMI, IJCV, Siggraph, Siggraph Asia, CVPR, ICCV, ECCV, ACCV, SCA, ICML and others. He co-organized one workshop and three tutorials: a tutorial at ICCV 2011 on Looking at People: Model Based Pose Estimation, two tutorials at ICCV 2015 and Siggraph 2016 on Modeling Human Bodies in Motion, and the PeopleCap workshop at ICCV'17.


His research interests are 3D modeling of humans and clothing in motion, and using machine learning and graphics models to solve vision problems.

04.04.2018

Easter break

11.04.2018 Merlin Nimier-David

Title: Coherent Hamiltonian Monte Carlo for Light Transport

Abstract: Rendering realistic images involves estimating a difficult and high-dimensional integral. Most classical algorithms approximate this integral with a Monte Carlo estimate over all paths that light can take within a given scene (path space). Obtaining a converged, noise-free image thus requires large amounts of computing power.
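
In path-integral form (standard rendering background, not specific to this talk), the value of pixel j is an integral over the space of light paths, estimated by averaging sampled paths weighted by their sampling density:

    I_j = \int_{\mathcal{P}} f_j(\bar{x}) \, \mathrm{d}\mu(\bar{x})
        \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f_j(\bar{x}_i)}{p(\bar{x}_i)},

where a path \bar{x} = (x_0, \dots, x_k) connects the camera to a light source, f_j is the measurement contribution function, and p is the density from which paths \bar{x}_i are drawn.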

Modern CPUs have vectorization units which allow carrying out operations on 8, 16 or more operands at once. Unfortunately, current state-of-the-art rendering algorithms feature highly incoherent workloads that barely benefit from vectorization. We will give intuition about what “coherence” means in this context, and why it is performance critical.
 
We present Coherent Hamiltonian Monte Carlo (CHMC), a rendering algorithm designed to generate coherent workloads, which are well suited to modern CPU architectures. Path space is explored through bundles of similar rays, effectively computing a blurred version of the target function. We will see that this formulation not only improves coherence, but also gives us access to robust gradient estimates, enabling practical usage of advanced MCMC schemes such as Hamiltonian Monte Carlo.

The talk will include a review of the relevant background, including vector instructions, physically based rendering, MCMC, and Hamiltonian Monte Carlo.
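
For background, a single generic HMC transition with leapfrog integration looks as follows (a textbook sketch, not the coherent variant from the talk; CHMC's contribution is that the gradient of the blurred target can be estimated robustly from ray bundles):

    import numpy as np

    def hmc_step(x, log_p, grad_log_p, step=0.1, n_leapfrog=10, rng=np.random):
        """One HMC transition targeting the density exp(log_p)."""
        p = rng.standard_normal(x.shape)        # sample an auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * step * grad_log_p(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new += step * grad_log_p(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_log_p(x_new)
        # Metropolis-Hastings acceptance based on the Hamiltonian.
        h_old = -log_p(x) + 0.5 * np.dot(p, p)
        h_new = -log_p(x_new) + 0.5 * np.dot(p_new, p_new)
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            return x_new
        return x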

18.04.2018 Peng Song

Title: An Interlocking Method for 3D Assembly Design and Fabrication

Abstract: 3D assemblies such as furniture involve multiple component parts. Rather than relying on additional fasteners such as nails and screws to connect the parts, component parts can be interlocked into a steady assembly based on their own geometric arrangements. This talk revisits the notion of 3D interlocking, and explores the governing mechanics of general 3D interlocking assemblies. From this, constructive approaches are developed for the computational design of various new interlocking assemblies such as puzzles, 3D printed objects and furniture. These interlocking assemblies are ready for fabrication, and their steadiness has been validated in our experiments.

25.04.2018 Hanjie Pan

Title: Looking beyond pixels: theory, algorithms and applications of continuous sparse recovery

Abstract: Sparse recovery is a powerful tool that plays a central role in many applications. Conventional approaches usually resort to discretization, where the sparse signals are estimated on a pre-defined grid. However, in reality the sparse signals do not line up conveniently on any grid.

We propose a continuous-domain sparse recovery technique by generalizing the finite rate of innovation (FRI) sampling framework to cases with non-uniform measurements. We achieve this by identifying a set of unknown uniform sinusoidal samples (which are related to the sparse signal parameters to be estimated) and the linear transformation that links the uniform samples of sinusoids to the measurements. It is shown that the continuous-domain sparsity constraint can be equivalently enforced with a discrete convolution equation of these sinusoidal samples. Then, the sparse signal is reconstructed by minimizing the fitting error between the given and the re-synthesized measurements (based on the estimated sparse signal parameters) subject to the sparsity constraint. Further, we develop a multi-dimensional sampling framework for Diracs in two or higher dimensions with linear sample complexity. This is a significant improvement over previous methods, which have a complexity that increases exponentially with space dimension. An efficient algorithm has been proposed to find a valid solution to the continuous-domain sparse recovery problem such that the reconstruction (i) satisfies the sparsity constraint; and (ii) fits the given measurements (up to the noise level).
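
The discrete convolution (annihilation) constraint mentioned above is the classical FRI identity: uniform samples of a sum of K complex sinusoids are annihilated by a filter of length K + 1 whose roots encode the unknown locations. A small noiseless toy example (illustrative setup, not the talk's non-uniform case):

    import numpy as np

    # K = 2 Diracs at unknown locations t_k in [0, 1); their uniform
    # "sinusoidal samples" are x[n] = sum_k a_k exp(-2j*pi*n*t_k).
    K = 2
    t_true = np.array([0.23, 0.61])
    a = np.array([1.0, 0.5])
    n = np.arange(8)
    x = (a * np.exp(-2j * np.pi * np.outer(n, t_true))).sum(axis=1)

    # Annihilation: a filter h of length K + 1 satisfies (h * x)[m] = 0.
    # Stack the convolution constraints and take the null-space vector.
    A = np.array([x[m - np.arange(K + 1)] for m in range(K, len(x))])
    h = np.linalg.svd(A)[2][-1].conj()

    # The roots of h are exp(-2j*pi*t_k); read off the locations.
    t_est = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))
    print(t_est)    # ~ [0.23, 0.61]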

We validate the flexibility and robustness of the FRI-based continuous-domain sparse recovery in both simulations and experiments with real data in radioastronomy, acoustics and microscopy.

02.05.2018 Delio Vicini

Title: Learning Light Transport using Variational Autoencoders

Abstract: In the real world, many objects are made out of materials which allow some light to be transmitted and scattered internally. This effect is called subsurface scattering and can have a large impact on the visual appearance of an object. Human skin, milk, marble and many other materials exhibit subsurface scattering. When rendering virtual scenes containing such objects, we have to simulate these internal light bounces, which is an expensive process.

In the past, different algorithms have been proposed to approximate internal scattering by simpler, analytical models. However, these approximations are inaccurate in many scenarios, since they often do not account for the scene geometry.

In this ongoing project, we try to accelerate rendering of subsurface scattering using machine learning. Instead of exhaustively simulating internal light scattering, we use a Variational Autoencoder (VAE) to estimate light transport more efficiently. Conditioned on the geometry of the scene, the VAE produces samples from the distribution of light scattered inside an object. Our method is readily embedded in conventional rendering systems.
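
Schematically, such a conditional decoder maps a latent sample plus a geometry descriptor to a scattered-light sample; at render time, sampling reduces to drawing z ~ N(0, I) and one network evaluation. The sketch below is illustrative (layer sizes and the geometry descriptor are assumptions, not the project's actual architecture):

    import torch
    import torch.nn as nn

    class ConditionalDecoder(nn.Module):
        """Maps (latent z, geometry descriptor g) to a 3D exit-point offset."""
        def __init__(self, z_dim=4, g_dim=32, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(z_dim + g_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, z, g):
            return self.net(torch.cat([z, g], dim=-1))

    decoder = ConditionalDecoder()
    g = torch.zeros(1, 32)       # geometry descriptor (placeholder values)
    z = torch.randn(1, 4)        # one latent draw = one scattering sample
    exit_offset = decoder(z, g)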

In this talk, we present the necessary background on subsurface scattering rendering as well as variational autoencoders; no prior knowledge of either topic is assumed.

09.05.2018 Helge Rhodin

Title: Representation Learning for Semi-Supervised 3D Human Pose Estimation

Abstract: Modern 3D human pose estimation techniques rely on deep networks, which require large amounts of training data. I'll present two semi-supervised approaches to overcome this problem.
First, we constrain the 3D pose to not only match the ground truth on labeled training sets, but also to be view-consistent on additional unlabeled multi-view examples.
Second, we learn a low-dimensional geometry-aware body representation without any 3D pose annotations. Because this representation encodes 3D geometry explicitly, using it in a semi-supervised setting makes it easier to learn a mapping from it to 3D human pose.
These methods are particularly interesting for domains where pose labels are hard to obtain, for instance alpine skiing, which cannot be performed and captured in lab conditions.
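
A minimal sketch of the first idea, multi-view consistency as an annotation-free loss (names and details are illustrative, not the talk's exact formulation): predictions from two calibrated views should agree after being transformed into a common frame.

    import numpy as np

    def view_consistency_loss(pose_a, pose_b, R_ab, t_ab):
        """pose_a, pose_b: (J, 3) predicted 3D joints in camera frames a and b.
        R_ab, t_ab: known rigid transform from camera a to camera b.
        Usable on unlabeled multi-view frames, since no 3D labels appear."""
        pose_a_in_b = pose_a @ R_ab.T + t_ab
        return np.mean(np.sum((pose_a_in_b - pose_b) ** 2, axis=-1))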

16.05.2018 Julian Panetta

Title: Robust Elastic Metamaterial Design for Additive Fabrication

Abstract: 3D printing can efficiently manufacture geometry of arbitrary complexity, but offers only limited control over the elastic material properties of the printed part. In this talk, I will present my work on designing elastic metamaterials for 3D printing: periodic microstructures that are tuned to emulate a large space of elastic materials. Since microstructures typically contain thin features that concentrate stress, they are prone to plastic deformation and fracture even under mild deformations. A key goal of my work is to design microstructures that minimize these stress concentrations, improving the metamaterials' robustness in practice. The design algorithm I developed is based on an efficient, exact solution to the worst-case stress analysis problem for periodic microstructures which supports several failure criteria (e.g., maximum principal stress or von Mises stress); it produces microstructures that minimize worst-case stresses while achieving a particular target elasticity tensor and satisfying fabrication constraints. I will also present a design tool to optimally apply these metamaterials to achieve high-level deformation goals.
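
For reference, the von Mises criterion mentioned above compares the following scalar measure, computed from the components of the Cauchy stress tensor, against the material's yield strength:

    \sigma_v = \sqrt{\tfrac{1}{2}\big[(\sigma_{11}-\sigma_{22})^2
        + (\sigma_{22}-\sigma_{33})^2 + (\sigma_{33}-\sigma_{11})^2\big]
        + 3\big(\sigma_{12}^2 + \sigma_{23}^2 + \sigma_{31}^2\big)}.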

23.05.2018 Tizian Zeltner

Title: The Layer Laboratory: A Calculus for Additive and Subtractive Composition of Anisotropic Surface Reflectance

Abstract: We present a versatile computational framework for modeling the reflective and transmissive properties of arbitrarily layered anisotropic material structures. Given a set of input layers, our model synthesizes an effective BSDF (bidirectional scattering distribution function) of the entire structure, which accounts for all orders of internal scattering and is efficient to sample and evaluate in modern rendering systems.
Our technique builds on the insight that reflectance data is sparse when expanded into a suitable frequency-space representation, and that this property extends to the class of anisotropic materials. This sparsity enables an efficient matrix calculus that admits the entire space of BSDFs and considerably expands the scope of prior work on layered material modeling.
In addition to additive composition, our model supports subtractive composition, a fascinating new operation that reconstructs the BSDF of a material that can only be observed indirectly through another layer with known reflectance properties. The operation produces a new BSDF of the desired layer as if measured in isolation.
We experimentally demonstrate the accuracy and scope of our model and validate both additive and subtractive composition using measurements of real-world layered materials.
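
For intuition, additive composition has the structure of the classical adding equations from radiative transfer (the operator notation below is generic and may differ from the paper's conventions). With R_i and T_i the reflection and transmission operators of layer i for light arriving from above, and primes denoting illumination from below, the combined two-layer operators are

    R_{12} = R_1 + T_1' \, R_2 \, (I - R_1' R_2)^{-1} \, T_1, \qquad
    T_{12} = T_2 \, (I - R_1' R_2)^{-1} \, T_1,

where the Neumann series hidden in the matrix inverse accounts for all orders of inter-reflection between the layers. Subtractive composition then amounts to solving such relations for the operators of one layer, given the other layer and the combined measurement.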

30.05.2018 Siavash Bigdeli

Title: Learning to Mean-Shift in O(1) for Bayesian Image Restoration

Abstract: Finding strong oracle priors is an important topic in image restoration. In this talk, I will show how denoising autoencoders (DAEs) learn to mean-shift in O(1), and how we leverage this to employ DAEs as generic priors for image restoration. I will also discuss the case of Gaussian DAEs in a Bayesian framework, where the degradation noise and/or blur kernel are unknown. Experimental results demonstrate state-of-the-art performance of the proposed DAE priors.
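
The mean-shift connection in the title can be summarized by the classical identity for the optimal Gaussian DAE r_\sigma (Miyasawa/Tweedie; see also Alain & Bengio 2014):

    r_\sigma(x) = x + \sigma^2 \, \nabla_x \log\,(p * g_\sigma)(x),

where p is the data density and g_\sigma a Gaussian of standard deviation \sigma. The residual r_\sigma(x) - x is thus a mean-shift vector pointing up the gradient of the smoothed log-density, so a single network evaluation yields a prior-gradient step in O(1).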