We propose and evaluate a neural point-based graphics method that can model semi-transparent scene parts. Like its predecessor pipeline, our method uses point clouds as proxy geometry and augments each point with a neural descriptor. In addition, our approach introduces a learnable transparency value for each point.
Our neural rendering procedure consists of two steps. First, the point cloud is rasterized into a multi-channel image using ray grouping. A neural rendering step then "translates" the rasterized image into an RGB output using a learnable convolutional network. New scenes are modeled by gradient-based optimization of both the neural descriptors and the rendering network.
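To make the first step concrete, below is a minimal NumPy sketch of the per-ray compositing idea: points projected onto the same pixel (ray) are sorted by depth, and their descriptors are accumulated front to back, weighted by the learnable per-point transparency. The function and array names are illustrative, not the released implementation.

import numpy as np

def composite_rays(pixel_ids, depths, descriptors, alphas, num_pixels):
    """Alpha-composite per-point descriptors front to back within each ray."""
    out = np.zeros((num_pixels, descriptors.shape[1]))
    transmittance = np.ones(num_pixels)  # light still passing through each ray
    for i in np.argsort(depths):         # nearest points are composited first
        p = pixel_ids[i]
        out[p] += transmittance[p] * alphas[i] * descriptors[i]
        transmittance[p] *= 1.0 - alphas[i]
    return out  # reshaped to (H, W, C) and fed to the rendering network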
We show that, after training, our approach can generate novel views of semi-transparent point cloud scenes. Our experiments demonstrate the benefit of introducing semi-transparency into neural point-based modeling for a range of scenes with semi-transparent parts.
We leverage synthetic RGBA data to demonstrate TRANSPR's ability to learn physically-based transparency with 4-channel supervision, and compare results with state-of-the-art methods: NPBG, Neural Volumes, and NeRF.
We show how TRANSPR can model the dynamic behavior of smoke by interpolating the descriptors learned for every second animation frame.
Each animation frame was treated as a separate scene. TRANSPR linearly interpolates the descriptors, while Neural Volumes employs a view-conditioning strategy based on the three nearest training cameras. The NeRF images inferred for the trained frames were linearly interpolated. The following demo showcases a single-frame rendering, as well as a comparison with the ground truth.
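Since descriptors live in a learned latent space, the in-between frames reduce to a simple blend. A hedged sketch of this interpolation, with illustrative variable names:

import numpy as np

def interpolate_descriptors(desc_a, desc_b, t):
    """Linearly blend per-point descriptor sets of two adjacent trained frames."""
    return (1.0 - t) * desc_a + t * desc_b

# Frames 0 and 2 were trained; frame 1 is rendered from their midpoint blend.
# desc_frame_1 = interpolate_descriptors(desc_frame_0, desc_frame_2, t=0.5)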
We present a comparative evaluation of TRANSPR on an in-the-wild semi-transparent scene trained using RGB supervision only.
Transparent glass is an extremely challenging surface for photogrammetric reconstruction, so a special two-sequence capture scenario was used to obtain proxy geometry for this scene. First, a 180° sequence of flowers in a transparent vase was captured as is. The vase was then wrapped in paper with a checkerboard pattern and captured again. Finally, a point cloud was reconstructed from each sequence, and the two were geometrically aligned.
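The alignment tool is not specified here; as one plausible illustration, the two reconstructions could be registered with Open3D's point-to-point ICP. File names and parameter values below are hypothetical.

import open3d as o3d

# Align the wrapped-vase reconstruction to the bare-vase one (illustrative).
flowers = o3d.io.read_point_cloud("flowers_vase.ply")
wrapped = o3d.io.read_point_cloud("vase_checkerboard.ply")

result = o3d.pipelines.registration.registration_icp(
    wrapped, flowers,
    max_correspondence_distance=0.02,  # in scene units; tune per scene
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
wrapped.transform(result.transformation)
merged = flowers + wrapped  # combined proxy geometry for the scene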
We demonstrate TRANSPR's ability to alter the learned transparency of objects in the Chiffon shirt and Scarf scenes.
We show how TRANSPR extends the scene editing scenario originally proposed in NPBG with added or altered transparency, allowing synthetic and real-world assets to be rendered jointly.
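Because a scene is stored as per-point positions, descriptors, and alphas, such edits reduce to simple array operations. A hedged sketch of both editing modes, with hypothetical array names rather than the released API:

import numpy as np

def set_opacity(alphas, mask, scale):
    """Rescale the learned per-point transparency of a selected object."""
    edited = alphas.copy()
    edited[mask] = np.clip(edited[mask] * scale, 0.0, 1.0)
    return edited

def merge_scenes(pts_a, desc_a, alpha_a, pts_b, desc_b, alpha_b):
    """Concatenate a real capture and a synthetic asset into one point scene."""
    return (np.concatenate([pts_a, pts_b]),
            np.concatenate([desc_a, desc_b]),
            np.concatenate([alpha_a, alpha_b]))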
@misc{kolos2020transpr,
  title         = {TRANSPR: Transparency Ray-Accumulating Neural 3D Scene Point Renderer},
  author        = {Maria Kolos and Artem Sevastopolsky and Victor Lempitsky},
  year          = {2020},
  eprint        = {2009.02819},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}