Background Extraction Based on Joint Gaussian Conditional Random Fields
Hong-Cyuan Wang, Yu-Chi Lai, Wen-Huang Cheng, Chin-Yun Cheng, Kai-Lung Hua
Wang, H.-C., Lai, Y.-C., Cheng, W.-H., Cheng, C.-Y., and Hua, K.-L., "Background Extraction Based on Joint Gaussian Conditional Random Fields," IEEE Transactions on Circuits and Systems for Video Technology, accepted, 2017.
Supplemental material
Abstract

Background extraction is generally the first step of many computer vision and augmented reality applications. Most existing methods assume the availability of a clean background shot and are therefore unsuitable for video sequences, such as highway traffic footage, whose complex foreground movement makes a clean background shot impossible to capture. We therefore introduce a novel joint Gaussian conditional random field (JGCRF) background extraction method that estimates optimal per-frame composition weights for a fixed-view video sequence. The estimation is formulated as a maximum a posteriori (MAP) problem that models the intra-frame and inter-frame relationships among all pixels through their spatial and temporal coherence and contrast distinctness. Since all background objects and elements are assumed to be static, static and motion-less patches in the video sequence are strong background cues for constructing artifact-free results. A motion-less extractor is therefore designed, using the temporal difference between consecutive frames and the accumulated temporal variation across the sequence, to separate moving foreground from motion-less background candidates. The JGCRF framework flexibly incorporates these motion-less patches as an extra observable random variable linked to the desired fusion weights, constraining the optimization toward more consistent and robust background separation. Quantitative and qualitative experimental results demonstrate the effectiveness and robustness of our approach over several state-of-the-art algorithms, with fewer artifacts and lower computational cost.
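To make the motion-less extraction step concrete, below is a minimal NumPy sketch of how motion-less background candidates can be flagged from the temporal difference between consecutive frames and the accumulated temporal variation, and how per-pixel frame weights can be fused into a background image. The function names, thresholds (`diff_thresh`, `var_thresh`), and the naive use of the candidate mask as fusion weights are illustrative assumptions only; in the paper the fusion weights come from solving the JGCRF MAP problem, which this sketch does not implement.

```python
import numpy as np


def motionless_candidates(frames, diff_thresh=10.0, var_thresh=15.0):
    """Flag motion-less background candidate pixels in a (T, H, W) gray stack.

    A pixel is a candidate in frame t when it barely changes between
    consecutive frames (small temporal difference) and its intensity varies
    little over the whole clip (small accumulated temporal variation).
    Thresholds are illustrative, not the paper's parameters.
    """
    frames = np.asarray(frames, dtype=np.float32)

    # Temporal difference between consecutive frames, shape (T-1, H, W).
    diff = np.abs(frames[1:] - frames[:-1])
    still = diff < diff_thresh
    # Reuse the first difference for frame 0 so the mask covers all T frames.
    still = np.concatenate([still[:1], still], axis=0)

    # Accumulated temporal variation over the whole clip, shape (H, W).
    accum_var = frames.std(axis=0)
    globally_static = accum_var < var_thresh

    return still & globally_static[None, :, :]


def fuse_background(frames, weights, eps=1e-6):
    """Per-pixel weighted fusion of the frame stack into one background image."""
    frames = np.asarray(frames, dtype=np.float32)
    w = np.asarray(weights, dtype=np.float32)
    total = w.sum(axis=0, keepdims=True)
    # Fall back to uniform averaging at pixels that never received any weight.
    w = np.where(total > eps, w / np.maximum(total, eps), 1.0 / frames.shape[0])
    return (w * frames).sum(axis=0)


if __name__ == "__main__":
    # Toy example: a static intensity ramp with a moving bright square.
    T, H, W = 30, 64, 64
    frames = np.tile(np.linspace(0, 200, W, dtype=np.float32), (T, H, 1))
    for t in range(T):
        frames[t, 20:30, 2 * t:2 * t + 10] = 255.0  # moving foreground block

    mask = motionless_candidates(frames)
    background = fuse_background(frames, mask.astype(np.float32))
    print(background.shape, mask.mean())
```

In the toy usage above, the candidate mask itself serves as the fusion weight, which already suppresses the moving block at pixels it only passes through briefly; the JGCRF of the paper instead couples these candidates with spatial and temporal coherence terms to produce smoother, more robust weights.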

Bibtex

XXX

Acknowledgement

This work was supported by the National Science Council of Taiwan under Grants MOST104-2221-E-011-091-MY2, MOST104-2221-E-011-029-MY3, and MOST105-2228-E-011-005.

PDF