Match arbitrary anatomical landmarks between two radiological images (e.g. CT, MR, X-ray).

SAM stands for "Self-supervised Anatomical eMbeddings", which is different from Meta's "Segment Anything Model". Please see the papers:

  • Xiaoyu Bai, Fan Bai, Xiaofei Huo, Jia Ge, Jingjing Lu, Xianghua Ye, Ke Yan, Yong Xia, "SAMv2: A Unified Framework for Learning Appearance, Semantic and Cross-Modality Anatomical Embeddings". arXiv, 2023
  • Ke Yan, Jinzheng Cai, Dakai Jin, Shun Miao, Dazhou Guo, Adam P. Harrison, Youbao Tang, Jing Xiao, Jingjing Lu, Le Lu, "SAM: Self-supervised Learning of Pixel-wise Anatomical Embeddings in Radiological Images". IEEE Trans. on Medical Imaging, 2022 (arXiv)
SAM can be used to match arbitrary anatomical landmarks between two radiological images (e.g. CT, MR, X-ray). You can pick a point on any anatomical structure in one image and then use SAM to detect the corresponding point in other images; a minimal sketch of this matching idea follows the list below. Its applications include but are not limited to:
  • Lesion tracking in longitudinal data;
  • One-shot universal anatomical landmark detection;
  • Using the landmarks to help align two images, e.g. for registration (SAME, SAMConvex);
  • Using the landmarks to extract anatomy-aware features (phase recognition);
  • Using the cross-image matching to help self-supervised learning (Alice); etc.
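
The snippet below is a minimal, illustrative sketch of the matching idea only, not the repository's actual API: it assumes you already have dense embedding volumes for the two images (random tensors stand in for real SAM embeddings here) and finds the voxel in image B whose embedding is most similar to the embedding at a chosen point in image A.

    # Illustrative sketch of embedding-based point matching (hypothetical helper,
    # not the repository's API). emb_a / emb_b stand in for dense SAM embedding
    # volumes of shape (C, D, H, W); point_a is a (z, y, x) voxel in image A.
    import torch
    import torch.nn.functional as F

    def match_point(emb_a, emb_b, point_a):
        c, d, h, w = emb_b.shape
        z, y, x = point_a
        query = F.normalize(emb_a[:, z, y, x], dim=0)            # (C,) unit-norm query embedding
        candidates = F.normalize(emb_b.reshape(c, -1), dim=0)    # (C, D*H*W) unit-norm candidates
        sim = query @ candidates                                  # cosine similarity for every voxel of B
        best = torch.argmax(sim).item()
        zb, yb, xb = best // (h * w), (best % (h * w)) // w, best % w
        return (zb, yb, xb), sim[best].item()

    # Random tensors standing in for real SAM embedding volumes of two images.
    emb_a = torch.randn(128, 32, 64, 64)
    emb_b = torch.randn(128, 32, 64, 64)
    matched_point, score = match_point(emb_a, emb_b, (10, 20, 30))
    print(matched_point, score)

In the published method, matching is done coarse-to-fine with separate global and local (and, in SAMv2, semantic) embeddings to disambiguate repetitive structures; the snippet above only illustrates the basic nearest-neighbour step.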

Documentation: https://github.com/alibaba-damo-academy/self-supervised-anatomical-embedding-v2
Source: https://github.com/alibaba-damo-academy/self-supervised-anatomical-embedding-v2
Jupyter: available as a Jupyter kernel on https://max-jhub.desy.de
Maxwell: module load maxwell mdlma/sam
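
As a quick sanity check after loading the module or selecting the kernel on max-jhub, the following verifies that PyTorch, which SAM is built on, and a GPU are visible (it assumes the environment provides PyTorch; the import name of the SAM code itself is not checked here, since it is not specified above):

    # Environment sanity check; assumes PyTorch is provided by the mdlma/sam environment.
    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))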