Authors
Paulis Barzdins, Ingus Pretkalnins, Guntis Barzdins
Publication date
2024/1/1
Journal
Baltic Journal of Modern Computing
Volume
12
Issue
1
Description
This paper presents a novel approach to open-set semantic segmentation in unstructured environments where there are no meaningful prior mask proposals. Our method leverages pretrained encoders from foundation models and uses image-caption datasets for training, reducing the need for annotated masks and extensive computational resources. We introduce a novel contrastive loss function, named CLIC (Contrastive Loss function on Image-Caption data), which enables training a semantic segmentation model directly on an image-caption dataset. By utilising image-caption datasets, our method provides a practical solution for semantic segmentation in scenarios where large-scale segmented mask datasets are not readily available, as is the case for unstructured environments where full segmentation is infeasible. Our approach is adaptable to evolving foundation models, as the encoders are used as black boxes. The proposed method has been designed with robotics applications in mind to enhance their autonomy and decision-making capabilities in real-world scenarios.
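The abstract does not give the CLIC formulation itself, so the following is only a minimal illustrative sketch of the general idea it describes: a symmetric, CLIP-style contrastive loss over batches of image-caption pairs, with the frozen foundation-model encoders treated as black boxes. All names (`contrastive_image_caption_loss`, `patch_embeddings`, `caption_embeddings`) and the mean-pooling aggregation are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a CLIP-style symmetric contrastive loss on
# image-caption pairs. This is NOT the paper's CLIC loss, whose exact
# formulation is not given in the abstract.
import torch
import torch.nn.functional as F


def contrastive_image_caption_loss(patch_embeddings: torch.Tensor,
                                   caption_embeddings: torch.Tensor,
                                   temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss over a batch of image-caption pairs.

    patch_embeddings:   (B, P, D) per-patch features from a frozen image
                        encoder used as a black box.
    caption_embeddings: (B, D) caption features from a frozen text encoder.
    """
    # Pool patch features into one image-level vector (mean pooling is a
    # placeholder for whatever aggregation a segmentation head would learn).
    image_embeddings = patch_embeddings.mean(dim=1)

    image_embeddings = F.normalize(image_embeddings, dim=-1)
    caption_embeddings = F.normalize(caption_embeddings, dim=-1)

    # Pairwise cosine similarities between every image and every caption.
    logits = image_embeddings @ caption_embeddings.t() / temperature

    # Matching image-caption pairs lie on the diagonal of the logit matrix.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: image-to-caption and caption-to-image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B, P, D = 8, 196, 512           # batch size, patches per image, embedding dim
    patches = torch.randn(B, P, D)   # stand-in for frozen image-encoder outputs
    captions = torch.randn(B, D)     # stand-in for frozen text-encoder outputs
    print(contrastive_image_caption_loss(patches, captions).item())
```

Because only image-level captions supervise the loss, no per-pixel masks are required for training; dense predictions would come from how a segmentation head reuses the per-patch features, which this sketch does not attempt to reproduce.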