Variational Optical Flow Computation Method Based on Motion Optimization Semantic Segmentation
GE Liyue1,2, DENG Shixin2, GONG Jie2, ZHANG Congxuan2,3, CHEN Zhen2
1. School of Information Engineering, Nanchang Hangkong University, Nanchang 330063; 2. Key Laboratory of Nondestructive Testing, Ministry of Education, Nanchang Hangkong University, Nanchang 330063; 3. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
Abstract To address the edge-blurring and over-segmentation issues of image sequence optical flow computation in complex scenes, such as illumination changes and large displacement motions, a variational optical flow computation method based on motion-optimized semantic segmentation is proposed. Firstly, an energy function for variational optical flow computation is constructed via a zero-mean normalized cross-correlation matching model defined over local image regions. Then, the motion boundary information obtained from the computed optical flow is utilized to optimize the initial image semantic segmentation result, and a variational optical flow computation model based on the motion-constrained semantic segmentation is constructed. Next, the optical flows of the various label areas are fused to acquire the refined flow field. Finally, experimental results on the Middlebury and UCF101 databases demonstrate that the proposed method performs well in computation accuracy and robustness, especially in edge preservation under illumination changes, in textureless regions and with large displacement motions.
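The data term mentioned in the abstract is built from zero-mean normalized cross correlation (ZNCC) over local image regions. As a reference for the reader, one common form of the ZNCC matching measure between a neighborhood N(x) in the first frame and the region displaced by the flow w(x) in the second frame is sketched below; the neighborhood size, weighting, and the exact way this score enters the variational energy are choices of the paper and are not reproduced here.

```latex
% One common form of the zero-mean normalized cross-correlation (ZNCC)
% between a local region N(x) of I_1 and the region displaced by the
% flow w(x) in I_2. \mu and \sigma denote the local mean and standard
% deviation over N(x).
\[
\mathrm{ZNCC}(\mathbf{x}, \mathbf{w}) =
\frac{\sum_{\mathbf{y} \in N(\mathbf{x})}
      \bigl(I_1(\mathbf{y}) - \mu_1(\mathbf{x})\bigr)
      \bigl(I_2(\mathbf{y} + \mathbf{w}) - \mu_2(\mathbf{x} + \mathbf{w})\bigr)}
     {|N(\mathbf{x})| \, \sigma_1(\mathbf{x}) \, \sigma_2(\mathbf{x} + \mathbf{w})}
\]
% A score close to 1 indicates a good correspondence; a data term can then
% penalize, e.g., 1 - ZNCC, which is insensitive to local gain/offset
% (illumination) changes.
```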
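The second and third steps, extracting motion boundaries from an initially computed flow to guide the semantic segmentation and then fusing the per-label flow fields into the refined result, could look roughly like the NumPy sketch below. The function names, the gradient-magnitude threshold, and the simple mask-based fusion rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def motion_boundary_map(flow, threshold=1.0):
    """Binary motion-boundary map from the gradient magnitude of a flow
    field of shape (H, W, 2) holding (u, v). The threshold is a stand-in
    for whatever boundary criterion the method actually uses."""
    du_y, du_x = np.gradient(flow[..., 0])
    dv_y, dv_x = np.gradient(flow[..., 1])
    grad_mag = np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)
    return grad_mag > threshold

def fuse_label_flows(label_map, per_label_flows):
    """Compose the refined flow field from per-label flow estimates.
    'label_map' is an (H, W) array of semantic labels; 'per_label_flows'
    maps each label to an (H, W, 2) flow array estimated for that region."""
    fused = np.zeros(label_map.shape + (2,), dtype=np.float32)
    for label, flow in per_label_flows.items():
        mask = label_map == label
        fused[mask] = flow[mask]
    return fused
```

The motion-boundary map would be used to correct segment borders of the initial semantic segmentation before the per-label flows are estimated and fused.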
Received: 09 April 2021
Fund: National Key Research and Development Program of China (No. 2020YFC2003800), National Natural Science Foundation of China (No. 61866026, 61772255, 61866025), China Postdoctoral Science Foundation (No. 2019M650894), Major Program of the Natural Science Foundation of Jiangxi Province (No. 20202ACB214007), Advantage Technology Innovation Team Project of Jiangxi Province (No. 20165BCB19007), Outstanding Young Talents Program of Jiangxi Province (No. 20192BCB23011), Aeronautical Science Foundation of China (No. 2018ZC56008)
Corresponding Author:
ZHANG Congxuan, Ph.D., associate professor. His research interests include image processing and computer vision.
About authors: GE Liyue, master, teaching assistant. His research interests include image processing and computer vision. DENG Shixin, master student. His research interests include image processing and computer vision. GONG Jie, master student. Her research interests include image detection and intelligent recognition. CHEN Zhen, Ph.D., professor. His research interests include image understanding and measurement.