Authors
Yufei Ye, Xueting Li, Abhinav Gupta, Shalini De Mello, Stan Birchfield, Jiaming Song, Shubham Tulsiani, Sifei Liu
Publication date
2023
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages
22479-22489
Description
Recent successes in image synthesis are powered by large-scale diffusion models. However, most methods are currently limited to either text- or image-conditioned generation for synthesizing an entire image, texture transfer, or inserting objects into a user-specified region. In contrast, in this work we focus on synthesizing complex interactions (i.e., an articulated hand) with a given object. Given an RGB image of an object, we aim to hallucinate plausible images of a human hand interacting with it. We propose a two-step generative approach that leverages a LayoutNet, which samples an articulation-agnostic hand-object-interaction layout, and a ContentNet, which synthesizes images of a hand grasping the object given the predicted layout. Both are built on top of a large-scale pretrained diffusion model to make use of its latent representation. Compared to baselines, the proposed method is shown to generalize better to novel objects and perform surprisingly well on out-of-distribution in-the-wild scenes. The resulting system allows us to predict descriptive affordance information, such as hand articulation and approaching orientation.
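The two-stage pipeline described above (LayoutNet samples a coarse hand-object layout, ContentNet then synthesizes the interaction image conditioned on that layout) can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch: the module names follow the paper's terminology, but the interfaces, layout parameterization, and toy architectures are assumptions for clarity, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LayoutNet(nn.Module):
    """Samples an articulation-agnostic hand-object layout.
    Here the layout is a toy 4-vector (center x, center y, scale, approach angle)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, object_latent):
        noise = torch.randn_like(object_latent)        # stochastic layout sampling
        return self.decoder(object_latent + noise)

class ContentNet(nn.Module):
    """Synthesizes a hand-object-interaction image conditioned on the
    object representation and the sampled layout."""
    def __init__(self, latent_dim=64, image_channels=3):
        super().__init__()
        self.fuse = nn.Linear(latent_dim + 4, latent_dim)
        self.render = nn.Sequential(
            nn.Linear(latent_dim, 32 * 32 * image_channels), nn.Tanh(),
        )

    def forward(self, object_latent, layout):
        h = self.fuse(torch.cat([object_latent, layout], dim=-1))
        return self.render(h).view(-1, 3, 32, 32)

# Toy end-to-end pass: object latent -> sampled layout -> synthesized image.
object_latent = torch.randn(1, 64)   # stand-in for a pretrained diffusion latent
layout = LayoutNet()(object_latent)
image = ContentNet()(object_latent, layout)
print(layout.shape, image.shape)     # torch.Size([1, 4]) torch.Size([1, 3, 32, 32])
```

In the paper both stages are built on a large-scale pretrained diffusion model rather than the simple MLPs used here; the sketch only conveys the factorization of generation into layout sampling followed by layout-conditioned content synthesis.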