2026 Proffered Presentations
S069: DEEP LEARNING BASED VOLUMETRIC MRI SEGMENTATION ALGORITHM FOR VESTIBULAR SCHWANNOMA MONITORING
Tanner J Zachem1; Syed M Adil1; Ethan Castellino1; Luis Cruz Mondragon1; Kristian Banovic1; Ashley Lin1; Jihad Abdelgadir2; Patrick J Codd1; Ali Zomorodi1; C. Rory Goodwin1; Evan Calabrese1; 1Duke University; 2University of Utah
Introduction: Vestibular schwannomas account for approximately 8% of primary intracranial lesions and are frequently monitored with long-term serial MRI. Accurate volumetric assessment of tumor size is crucial in both the preoperative and postoperative settings to guide clinical decisions regarding surgical timing, stereotactic radiosurgery, or continued surveillance. However, many vestibular schwannomas are small at presentation, and subtle changes in total tumor volume can be overlooked when assessed by diameter-based measurements or visual inspection alone. In the postoperative setting, interpretation becomes more complex, as deformation of residual or recurrent tumor tissue within the resection cavity may mask true tumor growth or mimic progression. For example, a small residual nodule may change in shape without a change in volume, leading to diagnostic uncertainty. Manual volumetric segmentation is time-consuming, subject to inter-observer variability, and not feasible in routine clinical practice. Automated volumetric segmentation offers an opportunity to provide clinicians with standardized, reproducible measurements of tumor volume and to enable data-driven monitoring and treatment strategies.
Methods: We retrospectively collected 286 preoperative MRIs from 169 patients at our institution who underwent surgical resection for vestibular schwannoma. The dataset included 169 high-resolution T2-weighted (FIESTA/SPACE) scans and 117 T1-weighted contrast-enhanced scans. Manual segmentations were provided by the study team for training and as ground truth for testing. Data were split into 80%/20% training and testing cohorts. Our deep learning algorithm was a 3D SegResNet with single-channel input, 32 convolutional filters, and a 0.2 dropout rate. The model was trained for 3,000 epochs with the Dice score as the primary performance metric.
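For readers unfamiliar with the evaluation metric, the Dice score used here measures voxel-wise overlap between a predicted and a manual segmentation mask. A minimal sketch of its computation on binary 3D masks (toy arrays standing in for real segmentations; not the study's actual evaluation code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 4x4x4 volumes standing in for a predicted and a manual segmentation.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
truth = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1   # 8 predicted tumor voxels
truth[1:3, 1:3, 1:4] = 1  # 12 ground-truth voxels, 8 overlapping
print(dice_score(pred, truth))  # 2*8 / (8 + 12) = 0.8
```

A score of 1.0 indicates perfect overlap and 0.0 no overlap, so the reported mean of 0.851 reflects substantial voxel-level agreement with manual segmentation.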
Results: On the held-out 20% evaluation set, the model achieved a mean Dice score of 0.851 (Figure 1). Qualitatively, the model performed well on a diverse range of lesions, including small intracanalicular lesions, large Koos grade IV lesions, and lesions with cystic or multinodular components. Of note, the same model accepts either high-resolution T2 (FIESTA/SPACE) or T1c input at this level of performance, making it more generalizable across sequences.

Figure 1: Example Segmentation of Large T2 Lesion. Top Left: Axial, Top Right: 3D Reconstruction of Segmentation Region, Bottom Left: Coronal, Bottom Right: Sagittal
Conclusion: We present a deep learning-based segmentation model for vestibular schwannomas that demonstrates strong initial performance on a single-institution preoperative dataset. The ability to accurately segment tumors from both T1c and high-resolution T2 sequences provides additional generalizability and utility. Given variable lesion sizes and dynamic anatomy, automated volumetric segmentation would decrease subjectivity in vestibular schwannoma decision-making. Future work will extend this model to postoperative imaging, prospective evaluation, and external validation with other institutions.
