This study aimed to develop a multi-scene model, Positioning and Focus Network (PFNet), for automatically segmenting acute vertebral compression fractures (VCFs) from spine radiographs. Conducted across five hospitals from 2016 to 2019, the study included both acute VCF patients and healthy controls.
PFNet was trained on radiographs from Hospitals A and B and tested on datasets from Hospitals A-E. The model achieved high segmentation accuracies (99.93%, 98.53%, 99.21%, and 100%) across the validation and test datasets. Compared with other approaches, PFNet achieved the highest values on every metric. Additionally, qualitative analyses and gradient-weighted class activation mapping (Grad-CAM) demonstrated the model's interpretability and effectiveness.
These results suggest that PFNet provides accurate preoperative and intraoperative segmentation of acute VCFs, supporting its use in clinical settings.
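As a rough illustration of the Grad-CAM interpretability analysis mentioned above, the following is a minimal, generic PyTorch sketch: the model, target layer, input shape, and the choice of summing the output map as the backpropagated score are placeholder assumptions, not the authors' PFNet implementation.

```python
# Minimal, generic Grad-CAM sketch for a segmentation network (illustrative only;
# model, target_layer, and scoring convention are assumptions, not PFNet code).
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer):
    """Return a normalized Grad-CAM heatmap of shape (1, 1, H, W) for input x."""
    activations, gradients = [], []

    def fwd_hook(_module, _inputs, output):
        activations.append(output)

    def bwd_hook(_module, _grad_in, grad_out):
        gradients.append(grad_out[0])

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    out = model(x)          # assumed (1, 1, H, W) segmentation logits / probabilities
    score = out.sum()       # simplification: aggregate score over the predicted map
    model.zero_grad()
    score.backward()

    h1.remove()
    h2.remove()

    acts, grads = activations[0], gradients[0]            # (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)         # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam
```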
Key points:
- This study developed the first multi-scene deep learning model capable of segmenting acute VCFs from spine radiographs.
- The model’s architecture consists of two key modules: an attention-guided module and a supervised decoding module (an illustrative sketch follows these key points).
- The exceptional generalization and consistently superior performance of our model were validated using multicenter external test datasets.
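
The abstract does not detail how the attention-guided module works, so the block below is only a hypothetical sketch of a standard additive attention gate (in the style of Attention U-Net), showing how decoder features can re-weight encoder features; all names and shapes are assumptions and this is not the authors' PFNet code.

```python
# Hypothetical additive attention gate (Attention U-Net style); NOT the
# authors' attention-guided module. Channel counts and shapes are assumptions.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, enc_channels: int, dec_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)  # encoder branch
        self.phi = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)    # gating (decoder) branch
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)               # attention coefficients
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat and dec_feat are assumed to share spatial size (H, W)
        attn = self.relu(self.theta(enc_feat) + self.phi(dec_feat))
        attn = self.sigmoid(self.psi(attn))   # (N, 1, H, W) weights in [0, 1]
        return enc_feat * attn                # suppress irrelevant regions

# Toy usage with random tensors
gate = AttentionGate(enc_channels=64, dec_channels=128, inter_channels=32)
enc = torch.randn(1, 64, 56, 56)
dec = torch.randn(1, 128, 56, 56)
out = gate(enc, dec)   # (1, 64, 56, 56)
```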
Authors: Hao Zhang, Genji Yuan, Ziyue Zhang, Xiang Guo, Ruixiang Xu, Tongshuai Xu, Xin Zhong, Meng Kong, Kai Zhu & Xuexiao Ma