User-Guided Deep Human Image Matting using Arbitrary Trimaps

by Xiaonan Fang · Song-Hai Zhang · Tao Chen · Xian Wu · Ariel Shamir · Shi-Min Hu

The Paper (IEEE Xplore)

Abstract

Image matting is widely studied for accurate foreground extraction. Most algorithms, including deep-learning based solutions, require a carefully edited trimap. Recent works attempt to combine the segmentation stage and matting stage in one CNN model, but errors occurring at the segmentation stage lead to unsatisfactory mattes. We propose a user-guided approach to practical human matting: it provides a good automatic initial matte and a natural form of interaction that reduces the workload of drawing trimaps and allows users to guide the matting in ambiguous situations. We also combine the segmentation and matting stages in an end-to-end CNN architecture and introduce a residual-learning module to support convenient stroke-based interaction. The proposed model learns to propagate the input trimap and modify the deep image features, which can efficiently correct segmentation errors. Our model supports arbitrary forms of trimaps, from carefully edited to totally unknown maps, and also allows users to choose among different foreground estimations according to their preference. We collected a large human matting dataset consisting of 12K real-world human images with complex backgrounds and human-object relations. The proposed model is trained on the new dataset with a novel trimap generation strategy that enables the model to tackle different test situations and greatly improves interaction efficiency. Our method outperforms other state-of-the-art automatic methods and achieves competitive accuracy when high-quality trimaps are provided. Experiments indicate that our interactive matting strategy is superior to separately estimating the trimap and alpha matte using two models.
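The abstract describes a residual-learning module that propagates the input trimap and modifies deep image features to correct segmentation errors. The following is a minimal conceptual sketch only, not the authors' implementation: the module name `TrimapResidualModule`, the layer sizes, and the trimap encoding (0 = background, 0.5 = unknown, 1 = foreground) are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TrimapResidualModule(nn.Module):
    """Hypothetical sketch of residual feature correction guided by a
    user trimap (or rasterized strokes); not the paper's actual module."""

    def __init__(self, feat_channels=64):
        super().__init__()
        # Encode the 1-channel trimap guidance into the feature space.
        # Assumed convention: 0 = background, 0.5 = unknown, 1 = foreground.
        self.trimap_enc = nn.Sequential(
            nn.Conv2d(1, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Predict a residual correction from features + guidance.
        self.residual = nn.Sequential(
            nn.Conv2d(feat_channels * 2, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
        )

    def forward(self, feats, trimap):
        # Resize the trimap guidance to the feature resolution.
        guidance = F.interpolate(trimap, size=feats.shape[2:], mode="nearest")
        g = self.trimap_enc(guidance)
        # Residual learning: the module corrects, rather than replaces,
        # the deep image features.
        return feats + self.residual(torch.cat([feats, g], dim=1))


# Usage sketch: a "totally unknown" trimap (all 0.5) stands in for the
# fully automatic case, matching the arbitrary-trimap claim above.
feats = torch.randn(1, 64, 64, 64)
trimap = torch.full((1, 1, 256, 256), 0.5)
out = TrimapResidualModule()(feats, trimap)
```

Because the correction is additive, a blank (all-unknown) trimap can leave the automatic prediction largely intact, while user strokes rasterized into the trimap locally push the features toward foreground or background.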