Journal articles

Learning contextual superpixel similarity for consistent image segmentation

Abstract: This paper addresses the problem of image segmentation by iterative region aggregations starting from an initial superpixel decomposition. Classical approaches for this task compute superpixel similarity using distance measures between superpixel descriptor vectors. This usually poses the well-known problem of the semantic gap and fails to properly aggregate visually non-homogeneous superpixels that belong to the same high-level object. This work proposes to use random forests to learn the merging probability between adjacent superpixels in order to overcome the aforementioned issues. Compared to existing works, this approach learns the fusion rules without explicit similarity measure computation. We also introduce a new superpixel context descriptor to strengthen the learned characteristics towards better similarity prediction. Image segmentation is then achieved by iteratively merging the most similar superpixel pairs selected using a similarity weighting objective function. Experimental results of our approach on four datasets including DAVIS 2017 and ISIC 2018 show its potential compared to state-of-the-art approaches.
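The pipeline the abstract describes (learn a merge probability for adjacent superpixel pairs, then greedily aggregate the most probable pair) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic descriptors, the simple pair representation (concatenated descriptor vectors), the mean-based descriptor fusion, and the 0.5 merge threshold are all assumptions made for the sake of a runnable example, and the paper's context descriptor and similarity weighting objective are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training set (assumption: in the paper, pairs come from
# ground-truth segmentations). Each sample is a concatenated pair of
# superpixel descriptor vectors; label 1 means "same object, merge".
n_pairs, d = 200, 8
X_train = rng.normal(size=(n_pairs, 2 * d))
y_train = rng.integers(0, 2, size=n_pairs)

# Learn merging probabilities instead of a hand-crafted distance.
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_train, y_train)

# Toy region-adjacency graph: descriptors for 4 superpixels and the
# list of adjacent pairs (hypothetical, for illustration only).
desc = rng.normal(size=(4, d))
adjacent = [(0, 1), (1, 2), (2, 3), (0, 3)]

# Greedy aggregation: repeatedly merge the pair with the highest
# predicted merge probability until no pair exceeds the threshold.
threshold = 0.5
labels = list(range(4))  # current region label of each superpixel
while adjacent:
    feats = np.array([np.concatenate([desc[i], desc[j]])
                      for i, j in adjacent])
    probs = forest.predict_proba(feats)[:, 1]
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        break
    i, j = adjacent.pop(best)
    # Relabel region j as region i; fuse descriptors by averaging
    # (a simplification of descriptor update after a merge).
    labels = [labels[i] if lab == labels[j] else lab for lab in labels]
    desc[i] = desc[j] = (desc[i] + desc[j]) / 2
```

The final `labels` list assigns each initial superpixel to an aggregated region; in the paper, this greedy loop is additionally guided by a similarity weighting objective function rather than a fixed probability threshold.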
Contributor: Pierre-Henri Conze
Submitted on: Tuesday, October 8, 2019 - 2:14:19 PM
Last modification on: Tuesday, October 20, 2020 - 6:48:06 PM



Mahaman Sani Chaibou Salaou, Pierre-Henri Conze, Karim Kalti, Mohamed Ali Mahjoub, Basel Solaiman. Learning contextual superpixel similarity for consistent image segmentation. Multimedia Tools and Applications, Springer Verlag, 2019, ⟨10.1007/s11042-019-08391-6⟩. ⟨hal-02308289⟩