From a sequence of frames that view a car in passing, our model simultaneously learns the parameters of a symmetry transformation from the data and applies the symmetry as a soft constraint to reconstruct the object, despite the significantly different view densities between the seen and unseen sides. The learned symmetry allows SNeS to share information across the model, resulting in more accurate reconstructions and higher-fidelity synthesised novel views.
We present a method for the accurate 3D reconstruction of partly-symmetric objects. We build on the strengths of recent advances in neural reconstruction and rendering, such as Neural Radiance Fields (NeRF). A major shortcoming of such approaches is that they fail to reconstruct any part of the object that is not clearly visible in the training images, which is often the case for in-the-wild images and videos. When evidence is lacking, structural priors such as symmetry can be used to complete the missing information. However, exploiting such priors in neural rendering is highly non-trivial: while geometry and non-reflective materials may be symmetric, shadows and reflections from the ambient scene generally are not. To address this, we apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour and reflectivity. We evaluate our method on the recently introduced CO3D dataset, focusing on the car category due to the challenge of reconstructing highly-reflective materials. We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel-view images.
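To give a sense of what a soft symmetry constraint looks like in practice, here is a minimal NumPy sketch, not the paper's implementation: it penalises disagreement between the signed distance at a point and at its mirror image across an assumed symmetry plane. The plane normal, the helper names (`reflect_across_plane`, `symmetry_loss`), and the squared-error form are all illustrative assumptions; in SNeS the symmetry transformation itself is learned from the data.

```python
import numpy as np

def reflect_across_plane(points, plane_normal):
    # Reflect 3D points across a plane through the origin with the given
    # normal. This stands in for the (learned) symmetry transformation.
    n = plane_normal / np.linalg.norm(plane_normal)
    return points - 2.0 * (points @ n)[:, None] * n

def symmetry_loss(sdf_fn, points, plane_normal=np.array([1.0, 0.0, 0.0])):
    # Soft symmetry prior on geometry: the SDF should agree at a point
    # and at its mirror image. A squared penalty keeps it "soft", so the
    # data term can still override it where the object is asymmetric.
    d = sdf_fn(points)
    d_mirrored = sdf_fn(reflect_across_plane(points, plane_normal))
    return np.mean((d - d_mirrored) ** 2)
```

For a shape that is already symmetric about the plane (e.g. a sphere at the origin), the loss is zero; off-centre shapes are penalised in proportion to their asymmetry.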
The Symmetric Neural Surfaces (SNeS) model. For an input 3D point x_i and viewing direction d, the model estimates the geometry with an SDF network that outputs a signed distance δ_i, a normal vector n_i, and a feature vector f_i. The first two are used to compute the opacity α_i, which assigns high opacity to points near surfaces. The feature vector is passed to the appearance networks to compute the material properties, albedo colour c_i^a and reflectivity γ_i^r, and the lighting properties, diffuse shading γ_i^d and specular colour c_i^s. Lastly, the Phong model is used to compute the colour of the 3D point, and the samples along the ray are composited to render the pixel with colour ĉ. The subscript s indicates whether the geometry, material and lighting components were computed from inputs that had undergone a symmetry transformation (1) or not (0), denoted by the triangular symbol. In each case, the lighting networks take different parameters θ, since lighting is typically asymmetric.
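The last two steps of the figure, a Phong-style shading of each sample and front-to-back compositing along the ray, can be sketched as follows. This is an illustrative assumption of the shading form (albedo modulated by diffuse shading plus an additive specular term) and of standard alpha compositing, not the paper's exact implementation; the function names are hypothetical.

```python
import numpy as np

def phong_colour(albedo, diffuse_shading, specular_colour):
    # Phong-style per-sample colour: the albedo c^a is scaled by the
    # diffuse shading γ^d, and the specular colour c^s is added on top.
    return albedo * diffuse_shading + specular_colour

def composite_ray(colours, alphas):
    # Standard front-to-back alpha compositing over ray samples:
    # transmittance T_i = prod_{j<i} (1 - α_j), weight w_i = T_i α_i,
    # and the pixel colour is the weighted sum of the sample colours.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas
    return (weights[:, None] * colours).sum(axis=0)
```

With this convention, a fully opaque first sample (α = 1) occludes everything behind it, matching the intuition that high opacity concentrates near surfaces.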
Eldar Insafutdinov, Dylan Campbell, João F. Henriques, Andrea Vedaldi
In ECCV, 2022.
@InProceedings{insafutdinov2022snes,
title = {SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data},
author = {Insafutdinov, Eldar and Campbell, Dylan and Henriques, Jo{\~a}o F and Vedaldi, Andrea},
booktitle = {ECCV},
year = {2022},
}
We thank the authors of NeuS whose implementation was used as a starting point in this work.
Eldar and Dylan are grateful for support from Continental AG and the European Research Council Starting Grant (IDIU 638009), and João for support from the Royal Academy of Engineering (RF\201819\18\163).
The source code for the template of this webpage can be found here.