FedSegNet: A Federated Learning Framework for 3D Medical Image Segmentation

Authors

  • Sidra Fareed, School of Information and Software Engineering, University of Electronic Science and Technology of China, Jianshe North Road, Chengdu, Sichuan, China
  • Ding Yi, School of Information and Software Engineering, University of Electronic Science and Technology of China, Jianshe North Road, Chengdu, Sichuan, China
  • Babar Hussain, School of Information and Software Engineering, University of Electronic Science and Technology of China, Jianshe North Road, Chengdu, Sichuan, China
  • Subhan Uddin, School of Information and Software Engineering, University of Electronic Science and Technology of China, Jianshe North Road, Chengdu, Sichuan, China
  • Aqsa Arif, School of Information and Software Engineering, University of Electronic Science and Technology of China, Jianshe North Road, Chengdu, Sichuan, China
  • Amir Nazar Tajoor, Department of Computer Science, University of Buner, Sowari Road, Buner, Pakistan

DOI:

https://doi.org/10.64229/ttnjjp90

Keywords:

Federated Learning, 3D Medical Image Segmentation, Transformer-based U-Net, Adaptive Aggregation, Privacy-Preserving AI

Abstract

Medical image segmentation plays a vital role in diagnostic imaging and treatment planning, especially for volumetric modalities such as MRI and CT. However, training high-performance deep learning models for 3D medical image segmentation requires large annotated datasets, which are often siloed due to strict privacy regulations such as HIPAA and GDPR. Federated Learning (FL) offers a decentralized solution that enables collaborative training without sharing raw patient data, but its effectiveness is hindered by challenges such as data heterogeneity, communication overhead, and model degradation on non-IID datasets. In this study, we propose FedSegNet, a novel federated learning framework tailored for 3D medical image segmentation. FedSegNet integrates a Transformer-based U-Net architecture to capture both local and global spatial features and introduces an Adaptive Aggregation Mechanism (AAM) that dynamically weights client updates based on data quality, performance, and divergence. To reduce communication costs, the framework employs gradient sparsification and quantization. We evaluate FedSegNet on multi-institutional datasets including BraTS, LiTS, and ACDC, using the Dice Similarity Coefficient and Hausdorff Distance as metrics. Results show that FedSegNet achieves up to a 7.2% improvement in segmentation accuracy and a 38% reduction in communication cost compared with existing methods, demonstrating its potential for secure, decentralized medical AI applications.
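
The abstract describes the Adaptive Aggregation Mechanism and the gradient-compression step only at a high level. The Python snippet below is a minimal illustrative sketch of how per-client adaptive weighting and top-k gradient sparsification could look; the function names (adaptive_weights, weighted_average, topk_sparsify), the specific scoring rule, and the PyTorch implementation are assumptions introduced here for illustration, not the authors' published code.

```python
# Illustrative sketch only: the scoring rule and compression below are assumptions,
# not the published FedSegNet method.
import torch


def adaptive_weights(quality, performance, divergence, eps=1e-8):
    """Turn per-client scores into normalized aggregation weights.

    quality and performance: higher is better; divergence: lower is better.
    Each argument is a 1-D tensor with one entry per client.
    """
    raw = quality * performance / (divergence + eps)  # assumed scoring rule
    return raw / raw.sum()                            # weights sum to 1


def weighted_average(client_states, weights):
    """FedAvg-style aggregation using adaptive weights instead of sample counts."""
    return {
        key: sum(w * state[key] for w, state in zip(weights, client_states))
        for key in client_states[0]
    }


def topk_sparsify(grad, ratio=0.01):
    """Keep only the largest-magnitude entries of a gradient (top-k sparsification)."""
    flat = grad.flatten()
    k = max(1, int(ratio * flat.numel()))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(grad)


# Example: three hypothetical clients scored for one communication round.
weights = adaptive_weights(torch.tensor([0.90, 0.70, 0.80]),   # data quality
                           torch.tensor([0.85, 0.80, 0.78]),   # local validation Dice
                           torch.tensor([0.10, 0.30, 0.20]))   # update divergence
```

In this sketch, clients with higher data quality and local performance but lower divergence from the global model receive larger aggregation weights, mirroring the three weighting criteria named in the abstract.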

Published

2025-07-30

Section

Articles