Authors
Xueting Li, Sifei Liu, Jan Kautz, Ming-Hsuan Yang
Publication date
2019
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages
3809-3817
Description
Given a random pair of images, a universal style transfer method extracts the feel from a reference style image to synthesize an output based on the look of a content image. Recent algorithms based on second-order statistics, however, are either computationally expensive or prone to generating artifacts due to the trade-off between image quality and runtime performance. In this work, we present an approach for universal style transfer that learns the transformation matrix in a data-driven fashion. Our algorithm is efficient yet flexible enough to transfer different levels of style with the same auto-encoder network. It also produces stable video style transfer results because it preserves the content affinity. In addition, we propose a linear propagation module to enable a feed-forward network for photo-realistic style transfer. We demonstrate the effectiveness of our approach on three tasks: artistic, photo-realistic, and video style transfer, with comparisons to state-of-the-art methods.
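To make the core idea concrete, below is a minimal PyTorch sketch of a data-driven linear transform: a small module summarizes content and style encoder features via their covariances, predicts a per-pair matrix T, and applies it to the content features in a single matmul. It assumes pre-extracted encoder features feat_c and feat_s; the layer names, compression width (64), and channel count (256) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LinearTransformSketch(nn.Module):
    """Illustrative sketch: predict a linear matrix T from second-order
    feature statistics, then stylize content features as T @ (phi_c - mu_c)."""

    def __init__(self, channels: int = 256, width: int = 64):
        super().__init__()
        # 1x1 convs compress encoder features before computing covariances.
        self.compress_c = nn.Conv2d(channels, width, kernel_size=1)
        self.compress_s = nn.Conv2d(channels, width, kernel_size=1)
        # Fully connected layer maps both covariances to a channels x channels matrix.
        self.predict_T = nn.Linear(2 * width * width, channels * channels)
        self.channels = channels

    def forward(self, feat_c: torch.Tensor, feat_s: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_c.shape
        # Second-order (covariance) summaries of content and style features.
        fc = self.compress_c(feat_c).flatten(2)            # (b, width, h*w)
        fs = self.compress_s(feat_s).flatten(2)
        cov_c = fc @ fc.transpose(1, 2) / fc.shape[-1]     # (b, width, width)
        cov_s = fs @ fs.transpose(1, 2) / fs.shape[-1]
        # Predict the transformation matrix in a data-driven fashion.
        T = self.predict_T(torch.cat([cov_c, cov_s], dim=1).flatten(1))
        T = T.view(b, self.channels, self.channels)
        # Apply the learned linear transform to mean-centered content features,
        # then shift by the style mean; one matmul at inference time.
        mu_c = feat_c.mean(dim=(2, 3), keepdim=True)
        mu_s = feat_s.mean(dim=(2, 3), keepdim=True)
        out = T @ (feat_c - mu_c).flatten(2)               # (b, c, h*w)
        return out.view(b, c, h, w) + mu_s
```

Because the transform is a single linear operation on the content features, the same module can be attached to encoder layers at different depths to transfer different levels of style, and its linearity is what keeps the content affinity (pairwise feature similarities) stable across video frames.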
Total citations
[Citations-per-year chart, 2020–2024; per-year counts not recoverable]
Scholar articles
X Li, S Liu, J Kautz, MH Yang - Proceedings of the IEEE/CVF Conference on Computer …, 2019