Authors
Liyuan Pan, Cedric Scheerlinck, Xin Yu, Richard Hartley, Miaomiao Liu, Yuchao Dai
Publication date
2018/11/26
Journal
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019; arXiv preprint arXiv:1811.10180
Description
Event-based cameras can measure intensity changes (called 'events') with microsecond accuracy under high-speed motion and challenging lighting conditions. With an active pixel sensor (APS), an event camera can also output intensity frames simultaneously. However, these images are captured at a relatively low frame-rate and often suffer from motion blur. A blurry image can be regarded as the integral of a sequence of latent images, while the events indicate the changes between those latent images. We can therefore model the blur-generation process by associating the event data with a latent image. In this paper, we propose a simple and effective approach, the Event-based Double Integral (EDI) model, to reconstruct a high frame-rate, sharp video from a single blurry frame and its event data. The video generation reduces to solving a simple non-convex optimization problem in a single scalar variable. Experimental results on both synthetic and real images demonstrate the superiority of our EDI model and optimization method over the state-of-the-art.
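The relationship the abstract describes can be written compactly: if the blurry frame B is the temporal average of latent images L(t) over the exposure of length T, and the events give log-intensity changes log L(t) − log L(f) = c·E(t), with E(t) the running sum of event polarities since a reference time f and c the contrast threshold, then L(f) = B / ((1/T) ∫ exp(c·E(t)) dt) and L(t) = L(f)·exp(c·E(t)). The sketch below illustrates this reconstruction step under simplifying assumptions: events arrive as (t, x, y, polarity) tuples, the reference time is the start of the exposure, the threshold c is treated as known (the paper finds it by a scalar non-convex optimization), and the exposure is discretized into a handful of sample times. The function name edi_reconstruct and the event format are hypothetical, not the authors' implementation.

```python
import numpy as np

def edi_reconstruct(blurry, events, t_start, t_end, c, n_frames=10):
    """Minimal sketch of the Event-based Double Integral (EDI) idea.

    blurry  : (H, W) float array, the blurry frame (linear intensity).
    events  : iterable of (t, x, y, polarity) tuples with t in [t_start, t_end].
    c       : event contrast threshold (the single scalar the paper optimizes).
    Returns a list of n_frames sharp latent frames spanning the exposure.
    """
    H, W = blurry.shape
    frame_times = np.linspace(t_start, t_end, n_frames)

    # E(t): accumulated event polarities per pixel from t_start up to each frame time.
    E = np.zeros((n_frames, H, W))
    acc = np.zeros((H, W))
    ev = sorted(events, key=lambda e: e[0])
    idx = 0
    for k, tk in enumerate(frame_times):
        while idx < len(ev) and ev[idx][0] <= tk:
            t, x, y, p = ev[idx]
            acc[y, x] += p
            idx += 1
        E[k] = acc

    # Double integral: the average of exp(c * E(t)) over the exposure approximates
    # B / L(t_start), so the sharp reference frame is B divided by that average.
    denom = np.mean(np.exp(c * E), axis=0)
    L_ref = blurry / np.maximum(denom, 1e-6)

    # Each latent frame follows from the event integral relative to the reference.
    return [L_ref * np.exp(c * E[k]) for k in range(n_frames)]
```

The paper determines c automatically by solving its scalar non-convex optimization; with this sketch, a simple grid search over c (scoring, for example, the sharpness of L_ref) would play the same role.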
Total citations
Citations per year, 2018–2024 (chart)
Scholar articles
Bringing a blurry frame alive at high frame-rate with an event camera
L Pan, C Scheerlinck, X Yu, R Hartley, M Liu, Y Dai - Proceedings of the IEEE/CVF Conference on Computer …, 2019