
Field-Aligned Online Surface Reconstruction

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017)

Multiple subsequently acquired 3D scans (top row) are interactively integrated into a coarse base mesh (bottom row). The user sees the final reconstructed result at all times, enriched with textures that encode colors and displacement (right-most). The user can decide to reconstruct a triangle or a quad-dominant mesh, which has high isotropy and regularity. [Original Sculpture Courtesy of Michael Defeo]

Abstract

Today's 3D scanning pipelines can be classified into two overarching categories: offline, high accuracy methods that rely on global optimization to reconstruct complex scenes with hundreds of millions of samples, and online methods that produce real-time but low-quality output, usually from structure-from-motion or depth sensors. The method proposed in this paper is the first to combine the benefits of both approaches, supporting online reconstruction of scenes with hundreds of millions of samples from high-resolution sensing modalities such as structured light or laser scanners. The key property of our algorithm is that it sidesteps the signed-distance computation of classical reconstruction techniques in favor of direct filtering, parametrization, and mesh and texture extraction. All of these steps can be realized using only weak notions of spatial neighborhoods, which allows for an implementation that scales approximately linearly with the size of each dataset that is integrated into a partial reconstruction. Combined, these algorithmic differences enable a drastically more efficient output-driven interactive scanning and reconstruction workflow, where the user is able to see the final quality field-aligned textured mesh during the entirety of the scanning procedure. Holes or parts with registration problems are displayed in real-time to the user and can be easily resolved by adding further localized scans, or by adjusting the input point cloud using our interactive editing tools with immediate visual feedback on the output mesh. We demonstrate the effectiveness of our algorithm in conjunction with a state-of-the-art structured light scanner and optical tracking system and test it on a large variety of challenging models.
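To illustrate why "weak notions of spatial neighborhoods" admit approximately linear-time integration, the following is a minimal, hypothetical sketch (not the paper's implementation): new scan samples are filtered against a uniform hash grid, so each incoming point only examines a constant number of nearby cells rather than the whole partial reconstruction. The class name, cell size, and merge radius are illustrative assumptions.

```python
# Hypothetical sketch of online point integration with a uniform hash grid.
# Each new sample touches only its 27 neighboring cells, so integrating a
# scan costs roughly O(number of samples in that scan), independent of the
# size of the accumulated reconstruction.
from collections import defaultdict

CELL = 1.0  # grid cell size; assumed to be on the order of the merge radius


def cell_of(p):
    """Map a 3D point to its integer grid cell."""
    return tuple(int(c // CELL) for c in p)


class OnlineCloud:
    def __init__(self, merge_radius=0.5):
        self.r2 = merge_radius ** 2
        self.grid = defaultdict(list)  # cell index -> points in that cell

    def _neighbors(self, p):
        """Yield stored points in the 3x3x3 block of cells around p."""
        cx, cy, cz = cell_of(p)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    yield from self.grid.get((cx + dx, cy + dy, cz + dz), ())

    def integrate(self, scan):
        """Add one scan, dropping samples already covered by earlier scans."""
        for p in scan:
            covered = any(
                sum((a - b) ** 2 for a, b in zip(p, q)) < self.r2
                for q in self._neighbors(p)
            )
            if not covered:
                self.grid[cell_of(p)].append(p)

    def points(self):
        return [p for pts in self.grid.values() for p in pts]
```

A real system would additionally maintain per-point orientation and position fields over these neighborhoods to extract the field-aligned mesh, but the locality of the queries is what keeps each integration step cheap.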

Text citation

Nico Schertler, Marco Tarini, Wenzel Jakob, Misha Kazhdan, Stefan Gumhold, and Daniele Panozzo. 2017. Field-Aligned Online Surface Reconstruction. In ACM Transactions on Graphics (Proceedings of SIGGRAPH) 36(4).

BibTeX
@article{Schertler2017Field,
    author = {Nico Schertler and Marco Tarini and Wenzel Jakob and Misha Kazhdan and Stefan Gumhold and Daniele Panozzo},
    title = {Field-Aligned Online Surface Reconstruction},
    journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH)},
    volume = {36},
    number = {4},
    year = {2017},
    month = jul,
    doi = {10.1145/3072959.3073635}
}