Motion blur can significantly degrade image quality, and researchers have developed a variety of algorithms to address it. A common approach to deblurring is to cancel the blur by deconvolution, but this approach is limited by the difficulty of accurately estimating blur kernels from blurred images, because the motion causing the blur is often complex and nonlinear. In this paper, a new method for estimating blur kernels is proposed. The method uses an event camera, which records pixel-level luminance changes at high temporal resolution, alongside a conventional camera that captures the blurred input image. By analyzing the event stream, the proposed method estimates the 2D motion of the image at short intervals during the exposure time and integrates these estimates to recover a wide variety of complex blur motions. With the estimated blur kernel, the blurred input image can then be deblurred by deconvolution. Because the proposed method does not rely on machine learning, it can restore blurry images without depending on the quality or quantity of training data. Experimental results show that the proposed method can estimate blur kernels even for images blurred by complex camera motions, outperforming conventional methods. Overall, this paper presents a promising approach to motion deblurring with potential applications in a range of fields.
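
The pipeline described above can be illustrated with a minimal sketch; this is not the authors' implementation. The idea is to bin events into short intervals over the exposure, estimate a 2D displacement per interval, integrate the displacements into a camera-motion trajectory, rasterize that trajectory into a blur kernel, and finally run non-blind deconvolution on the blurred frame. The event format, the brute-force shift estimator, and the Wiener deconvolution used here are simplifying assumptions for illustration only.

```python
# Sketch of event-guided blur-kernel estimation followed by deconvolution.
# All design choices below (event slicing, shift search, Wiener filter) are
# assumptions made for illustration, not the method described in the paper.

import numpy as np


def estimate_interval_shift(events_prev, events_curr, shape, max_shift=5):
    """Estimate the dominant 2D shift between two event slices.

    Each slice is an (N, 2) integer array of (x, y) event coordinates.
    A brute-force search over integer shifts that maximizes the overlap
    of the two binary event images stands in for a more sophisticated
    per-interval motion estimator.
    """
    def to_image(ev):
        img = np.zeros(shape, dtype=np.float32)
        img[ev[:, 1], ev[:, 0]] = 1.0
        return img

    a, b = to_image(events_prev), to_image(events_curr)
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = np.sum(a * np.roll(b, (dy, dx), axis=(0, 1)))
            if score > best_score:
                best, best_score = (dx, dy), score
    return best  # (dx, dy) in pixels


def trajectory_to_kernel(shifts, ksize=31):
    """Integrate per-interval shifts into a trajectory and rasterize it
    into a normalized ksize x ksize blur kernel."""
    kernel = np.zeros((ksize, ksize), dtype=np.float32)
    c = ksize // 2
    x, y = 0.0, 0.0
    points = [(x, y)]
    for dx, dy in shifts:          # accumulate displacements over the exposure
        x, y = x + dx, y + dy
        points.append((x, y))
    for px, py in points:          # mark each visited trajectory point
        ix, iy = int(round(c + px)), int(round(c + py))
        if 0 <= ix < ksize and 0 <= iy < ksize:
            kernel[iy, ix] += 1.0
    return kernel / max(kernel.sum(), 1e-8)


def wiener_deconvolve(blurred, kernel, snr=0.01):
    """Non-blind deconvolution with the estimated kernel (Wiener filter).

    Note: placing the kernel at the top-left before the FFT circularly
    shifts the result by the kernel's center offset; a more careful
    implementation would re-center the kernel or crop accordingly.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(W * G))
```

In this sketch, `estimate_interval_shift` would be called once per pair of consecutive event slices during the exposure, the resulting list of shifts passed to `trajectory_to_kernel`, and the kernel applied to the blurred frame via `wiener_deconvolve`. The key property the sketch tries to convey is that the kernel is built from measured motion rather than estimated blindly from the blurred image alone.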