Dataset Open Access

RGB-D Mirror Segmentation Dataset

Haiyang Mei, Bo Dong, Wen Dong, Pieter Peers, Xin Yang, Qiang Zhang, Xiaopeng Wei.

Depth-Aware Mirror Segmentation.

The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021 (Oral).

https://openaccess.thecvf.com/content/CVPR2021/papers/Mei_Depth-Aware_Mirror_Segmentation_CVPR_2021_paper.pdf

Please cite this paper when using the data: BibTex.txt


Introduction:

We present a novel mirror segmentation method that leverages depth estimates from ToF-based cameras as an additional cue to disambiguate challenging cases where the contrast or relation in RGB colors between the mirror reflection and the surrounding scene is subtle. A key observation is that ToF depth estimates do not report the true depth of the mirror surface, but instead return the total length of the reflected light paths, thereby creating obvious depth discontinuities at the mirror boundaries. To exploit depth information in mirror segmentation, we first construct a large-scale RGB-D mirror segmentation dataset, which we subsequently employ to train a novel depth-aware mirror segmentation framework. Our mirror segmentation framework first locates the mirrors based on color and depth discontinuities and correlations. Next, our model further refines the mirror boundaries through contextual contrast, taking into account both color and depth information. We extensively validate our depth-aware mirror segmentation method and demonstrate that our model outperforms state-of-the-art RGB and RGB-D based methods for mirror segmentation. Experimental results also show that depth is a powerful cue for mirror segmentation.
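
To get a quick feel for the depth cue described above, the sketch below loads a single depth map and visualizes depth discontinuities as a rough mirror-boundary prior. This is only an illustration of the observation, not the segmentation framework from the paper; the file name (depth.png), the 16-bit millimetre depth encoding, and the threshold value are assumptions that may differ from the released dataset.

```python
# Minimal, hypothetical sketch of the depth-discontinuity cue discussed above.
# NOTE: the sample path, depth scale, and threshold are assumptions for
# illustration only; they are NOT taken from the released dataset.

import numpy as np
from PIL import Image
from scipy import ndimage


def depth_discontinuity_map(depth_path: str, scale: float = 1000.0) -> np.ndarray:
    """Return a normalized depth-gradient magnitude map.

    Large values tend to appear where the ToF sensor jumps from the true
    scene depth to the (longer) reflected light path inside a mirror,
    i.e. near mirror boundaries.
    """
    depth = np.asarray(Image.open(depth_path), dtype=np.float32) / scale  # metres (assumed encoding)
    # Sobel gradients in x and y, then gradient magnitude.
    gx = ndimage.sobel(depth, axis=1)
    gy = ndimage.sobel(depth, axis=0)
    mag = np.hypot(gx, gy)
    # Normalize to [0, 1] for visualization / thresholding.
    return mag / (mag.max() + 1e-8)


if __name__ == "__main__":
    cue = depth_discontinuity_map("depth.png")   # hypothetical sample path
    boundary_prior = cue > 0.3                   # crude threshold, assumption
    Image.fromarray((boundary_prior * 255).astype(np.uint8)).save("boundary_prior.png")
```

In the paper, the discontinuities and the correlations between color and depth are exploited by the learned framework itself rather than by a hand-tuned threshold; the sketch only makes the raw cue visible.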



mirror.gif



Please enter your application information to request access to the dataset.