posted on 2025-09-09, 14:39, authored by Rabia Saleem, Ashiq Anjum, Bo Yuan, Lu Liu
Siamese Neural Networks (SNNs) have shown promise in addressing a variety of tasks, even with limited data availability. However, their adoption is hindered by the lack of transparency in their decision-making processes. A key challenge in explaining SNNs lies in the absence of an inverse mapping between the high-dimensional input feature vectors and the low-dimensional embedding space; computing direct distances between input features is therefore meaningless. Existing autoencoder-based explanation methods face several limitations, including poor image reconstruction quality due to insufficient data and the omission of the SNN's final distance layer during the explanation process. While the Siamese Network Explainer (SINEX) can explain audio and grayscale images, it does not support RGB images. To overcome these challenges, we propose a method called Features Distance-based eXplanation (FDbX). This approach identifies salient features using ridge regression trained on perturbed SLIC-segmented images. To enhance the selection of important features, we incorporate Bayesian analysis, which assigns importance scores to features. To provide a comprehensive explanation of the decision route, we construct a mathematical model that represents important features and their Hamming distances as a bipartite graph, in which nodes represent features and edges denote distances between feature pairs. The resulting explanation heatmaps highlight critical image segments, offering more intuitive and visually informative explanations than existing methods. We evaluate the stability and faithfulness of our method using stability indices such as $R^2$ and mean squared error. To the best of our knowledge, this is the first work to introduce Variable and Coefficient Stability Indices for image datasets.
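The ridge-regression step described in the abstract is in the spirit of LIME-style surrogate explanations. The sketch below illustrates one plausible realisation, not the paper's exact procedure: SLIC segments of the query image are randomly switched on and off, the SNN distance to the reference image is recorded for each perturbation, and a ridge surrogate maps segment masks to distances so that its coefficients rank segments by importance. The `embed` function standing in for the trained SNN embedding branch is hypothetical; the segmentation and regression calls use scikit-image and scikit-learn.

```python
# Minimal sketch of the perturbation + ridge-regression step (an assumption,
# not the published FDbX implementation). `embed` is a hypothetical stand-in
# for the trained SNN's embedding branch.
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_pair(query, reference, embed, n_segments=50, n_samples=500, seed=None):
    """Return (segment map, per-segment importance weights) for the SNN
    distance between `query` and `reference`, via a ridge surrogate."""
    rng = np.random.default_rng(seed)
    segments = slic(query, n_segments=n_segments, start_label=0)
    n_feats = segments.max() + 1
    baseline = query.mean(axis=(0, 1))          # fill colour for masked segments
    ref_vec = embed(reference)

    Z = rng.integers(0, 2, size=(n_samples, n_feats))   # random on/off masks
    y = np.empty(n_samples)
    for i, z in enumerate(Z):
        perturbed = query.copy()
        for s in np.flatnonzero(z == 0):        # grey out switched-off segments
            perturbed[segments == s] = baseline
        y[i] = np.linalg.norm(embed(perturbed) - ref_vec)  # SNN distance target

    surrogate = Ridge(alpha=1.0).fit(Z, y)
    return segments, surrogate.coef_            # one coefficient per segment
```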
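To make the bipartite decision-route model concrete, the following sketch builds such a graph with networkx, under our own assumption (not stated in the abstract) that each selected feature has been binarized into a fixed-length bit vector so that Hamming distances between feature pairs are well defined.

```python
# Minimal sketch of the bipartite decision-route graph: nodes are important
# features of the query/reference images, edges carry Hamming distances.
# The bit-vector descriptors are an illustrative assumption.
import networkx as nx
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length binary vectors."""
    return int(np.count_nonzero(a != b))

def build_route_graph(query_feats, ref_feats):
    """query_feats / ref_feats: dicts mapping segment id -> binary descriptor.
    Returns a bipartite graph whose edge weights are Hamming distances."""
    G = nx.Graph()
    G.add_nodes_from((f"q{i}" for i in query_feats), bipartite=0)
    G.add_nodes_from((f"r{j}" for j in ref_feats), bipartite=1)
    for i, qa in query_feats.items():
        for j, rb in ref_feats.items():
            G.add_edge(f"q{i}", f"r{j}", weight=hamming(qa, rb))
    return G
```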
Author affiliation
College of Science & Engineering
Computing & Mathematical Sciences