Abstract

The scarce availability of labeled data makes multi-modal domain adaptation an attractive approach in medical image analysis. Deep learning-based registration methods, however, still struggle to outperform their non-trained counterparts. Supervised domain adaptation, moreover, requires labeled or other ground-truth data. Unsupervised domain adaptation is therefore a valuable goal, but one that has so far mainly shown success in classification tasks. We are the first to report unsupervised domain adaptation for discrete displacement registration using classifier discrepancy in medical imaging. We train our model with mono-modal registration supervision; for cross-modal registration no supervision is required, as we instead use the discrepancy between two classifiers as the training loss. We also present a new projected Earth Mover's distance (EMD) for measuring classifier discrepancy: by projecting the 2D distributions onto 1D histograms, the EMD with L1 ground distance can be computed from their cumulative sums.
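The cumulative-sum trick mentioned above rests on a standard identity: for 1D histograms, the Earth Mover's distance with L1 ground cost equals the L1 distance between their cumulative distribution functions. A minimal sketch, assuming normalized histograms and axis-wise marginal projections (the exact projection scheme used in the paper may differ):

```python
import numpy as np

def emd_1d(p, q):
    """1D EMD with L1 ground distance via the CDF identity:
    EMD(p, q) = sum_i |cumsum(p)_i - cumsum(q)_i|."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def projected_emd(P, Q):
    """Hypothetical projected EMD for two 2D distributions:
    project each onto its axis marginals and sum the 1D EMDs."""
    total = 0.0
    for axis in (0, 1):
        total += emd_1d(P.sum(axis=axis), Q.sum(axis=axis))
    return total

# Example: two point masses one bin apart along each axis.
P = np.zeros((3, 3)); P[0, 0] = 1.0
Q = np.zeros((3, 3)); Q[1, 1] = 1.0
print(projected_emd(P, Q))  # 2.0 (one bin of mass moved per axis)
```

The projection reduces an expensive 2D optimal-transport problem to two closed-form 1D computations, which is what makes the discrepancy cheap enough to use as a training loss.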

Christian N. Kruse et al., Multi-modal Unsupervised Domain Adaptation for Deformable Registration Based on Maximum Classifier Discrepancy, https://doi.org/10.1007/978-3-658-33198-6_47