Authors
Jiaying Liu, Wenhan Yang, Shuai Yang, Zongming Guo
Publication date
2018/6
Conference
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Pages
3233-3242
Description
In this paper, we address the problem of video rain removal by constructing deep recurrent convolutional networks. We revisit the rain removal problem by considering rain occlusion regions, i.e., regions where the light transmittance of rain streaks is low. Different from additive rain streaks, in such rain occlusion regions the details of the background image are completely lost. Therefore, we propose a hybrid rain model to depict both rain streaks and occlusions. Exploiting the wealth of temporal redundancy, we build a Joint Recurrent Rain Removal and Reconstruction Network (J4R-Net) that seamlessly integrates rain degradation classification, spatial-texture-based rain removal, and temporal-coherence-based background detail reconstruction. The rain degradation classification provides a binary map that reveals whether a location is degraded by linear additive streaks or by occlusions. With this side information, the gate of the recurrent unit learns to make a trade-off between rain streak removal and background detail reconstruction. Extensive experiments on a series of synthetic and real videos with rain streaks verify the superiority of the proposed method over previous state-of-the-art methods.
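The hybrid rain model described above can be sketched in a few lines: where the binary degradation map marks additive streaks, the observed pixel is background plus streak; where it marks occlusion, the background is replaced entirely by a rain-reliance term. The function and array names below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def hybrid_rain_model(background, streaks, occlusion_mask, rain_reliance):
    """Compose an observed rainy frame under a hybrid rain model (sketch).

    occlusion_mask is the binary map from the abstract: 0 where rain is a
    linear additive streak, 1 where background detail is fully occluded.
    All names/shapes are assumptions for illustration, not the paper's code.
    """
    # Additive regime: observed = background + streak intensity.
    # Occluded regime: observed is dominated by the rain reliance term.
    return (1 - occlusion_mask) * (background + streaks) \
        + occlusion_mask * rain_reliance
```

In the occluded regions the background term drops out entirely, which is why J4R-Net must reconstruct those details from temporally adjacent frames rather than simply subtracting the streak layer.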
Total citations
2018: 9, 2019: 21, 2020: 38, 2021: 48, 2022: 39, 2023: 38, 2024: 19
Scholar articles
J Liu, W Yang, S Yang, Z Guo - Proceedings of the IEEE conference on computer …, 2018