posted on 2025-10-03, 15:19, authored by L Zhao, J Yang, K Li, W Ding, Huiyu Zhou
<p dir="ltr">To segment cell images of different modalities accurately, we must address over-segmentation and under-segmentation caused by uncertainty in modality-specific pixel distributions and cell morphology. We also study the problem of limited labeled data. Most previous methods lack the ability to perceive global uncertainty information, cannot capture local uncertainty details, and are constrained by the small number of labeled multi-modal cell images. We introduce a novel framework that accurately learns valuable information for the multi-modal cell segmentation task with data- and modality-uncertainty awareness. First, we propose an image fusion module that uses a multi-branch structure combining dilated convolution, regular convolution, and a channel attention mechanism to preserve globally valuable information. Second, to recover local boundaries in obscure and irregular uncertainty regions, we develop a transformer-based encoding strategy that performs token selection and enhancement guided by a feedback confidence score; this score is computed from the output of a teacher network and indicates the most likely local boundary. Third, a pseudo-label selection strategy is employed to improve the annotation quality of unlabeled images. We evaluated our method on three publicly available datasets covering different cell modalities and compared it quantitatively with fifteen previous methods; our method outperformed all of them. This study has important implications for clinical applications, including diagnostic accuracy and decision-making reliability.</p>
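The pseudo-label selection step described above could, for example, keep only those unlabeled images whose predictions clear a confidence bar. The sketch below is a hypothetical NumPy illustration of that idea, not the paper's actual implementation: the function names, the mean max-softmax confidence score, and the 0.9 threshold are all our assumptions.

```python
import numpy as np

def softmax(logits, axis=1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_pseudo_labels(logits_batch, threshold=0.9):
    """Illustrative confidence-thresholded pseudo-label selection.

    logits_batch: (N, C, H, W) raw network outputs for N unlabeled images.
    Scores each image by its mean per-pixel max-softmax probability and
    keeps only images whose score reaches `threshold` (an assumed value).
    Returns the kept indices and their hard pseudo-label maps.
    """
    probs = softmax(logits_batch, axis=1)              # per-pixel class probabilities
    confidence = probs.max(axis=1).mean(axis=(1, 2))   # one scalar score per image
    keep = np.where(confidence >= threshold)[0]
    pseudo = probs.argmax(axis=1)                      # hard per-pixel labels
    return keep, pseudo[keep]

# Toy example: 2 "images", 2 classes, 4x4 pixels.
logits = np.zeros((2, 2, 4, 4))
logits[0, 0] = 5.0  # image 0: class 0 strongly preferred everywhere
# image 1 stays all-zero, so its per-pixel confidence is only 0.5
keep, labels = select_pseudo_labels(logits, threshold=0.9)
```

In this toy run only the confident first image is accepted, and its pseudo-label map assigns class 0 to every pixel; the all-zero second image is rejected. In practice the threshold would be tuned, and the score could just as well come from the teacher network's boundary confidence mentioned above.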
Author affiliation
College of Science & Engineering
Computing & Mathematical Sciences