Authors
Ayan Sinha, Asim Unmesh, Qixing Huang, Karthik Ramani
Publication date
2017
Conference
Proceedings of the IEEE conference on computer vision and pattern recognition
Pages
6040-6049
Description
3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit, as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent 'geometry images' representing the 3D shape surface of a category of shapes. We then use this consistent representation for category-specific shape generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of 3D surface generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces, reconstruct 3D shape surfaces from previously unseen images, and rectify noisy correspondence between 3D shapes belonging to the same class.
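The abstract describes mapping a parametric representation (or an image) to a 'geometry image': a regular 2D grid whose channels store the surface's (x, y, z) coordinates, produced by extensions of deep residual networks. The sketch below is a minimal, illustrative PyTorch decoder in that spirit; the class names (GeometryImageDecoder, ResidualBlock), layer counts, channel widths, and the 64x64 output resolution are assumptions made for the example, not the architecture from the paper.

```python
# Illustrative sketch only: a small residual decoder that maps a latent shape
# code to an N x N x 3 "geometry image" whose three channels store surface
# (x, y, z) coordinates. Sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Standard residual connection: output = F(x) + x
        return torch.relu(self.body(x) + x)


class GeometryImageDecoder(nn.Module):
    def __init__(self, latent_dim=128, base_channels=64, out_size=64):
        super().__init__()
        self.start = out_size // 8  # three 2x upsampling stages below
        self.base_channels = base_channels
        self.fc = nn.Linear(latent_dim, base_channels * self.start * self.start)
        stages = []
        for _ in range(3):
            stages += [
                ResidualBlock(base_channels),
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(base_channels, base_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ]
        self.stages = nn.Sequential(*stages)
        # Final 3 channels hold the (x, y, z) coordinates of the surface.
        self.head = nn.Conv2d(base_channels, 3, kernel_size=3, padding=1)

    def forward(self, z):
        x = self.fc(z).view(-1, self.base_channels, self.start, self.start)
        return self.head(self.stages(x))


if __name__ == "__main__":
    decoder = GeometryImageDecoder()
    z = torch.randn(2, 128)   # batch of latent shape codes
    geom = decoder(z)         # (2, 3, 64, 64) geometry images
    print(geom.shape)
```

For image-conditioned generation as described in the abstract, the latent code z would come from an image encoder rather than being sampled directly; that encoder is omitted here.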
Total citations
2017: 3, 2018: 34, 2019: 38, 2020: 47, 2021: 31, 2022: 15, 2023: 24, 2024: 12
Scholar articles
A Sinha, A Unmesh, Q Huang, K Ramani - Proceedings of the IEEE conference on computer …, 2017