2026 Proffered Presentations
S055: PREDICTING MENINGIOMA GRADE FROM STIMULATED RAMAN HISTOLOGY WITH DEEP LEARNING
Richard Song; Osaama H Khan, MD, MSc, FRCSC, FAANS, FACS; Northwestern Medicine Department of Neurological Surgery
Introduction: Meningiomas are the most common primary intracranial tumors in adults and are classified into three grades by the World Health Organization. Following gross total resection, grade 1 meningiomas recur in 7-25% of cases, while grade 2 and grade 3 meningiomas recur at higher rates of approximately 29-52% and 50-94%, respectively (Backer-Grøndahl et al., 2012). Treating grade 2 and 3 meningiomas may therefore require more extensive surgical strategies, such as supratotal resection. Meningioma grading currently relies on H&E staining, immunohistochemistry, and molecular profiling; these methods require tissue processing and expert review, limiting their intraoperative utility.
Stimulated Raman Histology (SRH) is a technique that reconstructs H&E-like images by detecting chemical vibrations of CH2 and CH3 bonds, measured at Raman shifts of 2845 cm⁻¹ and 2940 cm⁻¹. SRH can generate high-quality images within minutes, raising the possibility of intraoperative pathological assessment. In this study, we investigate whether deep learning can accurately predict grade 1 versus grade 2 or 3 meningiomas from SRH images of resected tissue samples.
Methods: All patients provided informed consent to an IRB-approved tumor bank. Patients who underwent resection for meningioma between April 2018 and August 2025 at a single institution were included, with final pathology confirming tumor grade. Surgical specimens collected prior to April 2024 were scanned as frozen sections using the Invenio Imaging NIO Laser Imaging System. After April 2024, SRH was performed intraoperatively. All whole-slide images were subdivided into non-overlapping 300 × 300 pixel patches for model input.
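The patching step described above can be sketched as follows; the function name, array layout, and the choice to discard partial edge tiles are illustrative assumptions, not the authors' code:

```python
import numpy as np

def tile_slide(slide: np.ndarray, patch: int = 300) -> np.ndarray:
    """Split a whole-slide image (H, W, C) into non-overlapping
    patch x patch tiles, discarding incomplete tiles at the edges."""
    h, w = slide.shape[:2]
    rows, cols = h // patch, w // patch
    tiles = [
        slide[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
        for r in range(rows)
        for c in range(cols)
    ]
    return np.stack(tiles)  # (n_patches, patch, patch, C)

# Example: a 650 x 920 two-channel SRH slide yields 2 x 3 = 6 full tiles
demo_slide = np.zeros((650, 920, 2))
tiles = tile_slide(demo_slide)  # shape (6, 300, 300, 2)
```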
We applied transfer learning to finetune FastGlioma, a pretrained SRH visual foundation model, for meningioma grade classification (Kondepudi et al., 2025). The FastGlioma backbone consists of a patch encoder and a whole-slide transformer. The patch encoder was trained via a hierarchical discriminative learning task, generating embeddings for each patch. These embeddings were then processed by the whole-slide transformer, which optimizes a self-supervised objective at the whole-slide level.
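The shape of the two-stage pipeline can be illustrated schematically. The stand-in below shows only how per-patch embeddings are aggregated to a single slide-level embedding by a transformer; the actual FastGlioma weights, layer counts, and pretraining objectives are not reproduced here:

```python
import torch
import torch.nn as nn

# Placeholder for embeddings from the pretrained patch encoder:
# one 512-d vector per 300 x 300 patch (10 patches in this toy example).
patch_embeddings = torch.randn(10, 512)

# Stand-in for the whole-slide transformer: a small encoder stack followed
# by mean pooling over patches to produce one 512-d slide embedding.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
slide_transformer = nn.TransformerEncoder(layer, num_layers=2)

with torch.no_grad():
    out = slide_transformer(patch_embeddings.unsqueeze(0))  # (1, 10, 512)
    slide_embedding = out.mean(dim=1).squeeze(0)            # (512,)
```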
The 512-dimensional embeddings produced by FastGlioma served as input to a two-hidden-layer neural network trained with binary cross-entropy loss. Training used the Adam optimizer with a learning rate of 5 × 10⁻⁴, dropout of 0.2, batch size of 4, and weight decay of 1 × 10⁻³. Model performance was evaluated with 5-fold cross-validation to ensure robustness.
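A minimal PyTorch sketch of the classifier head and training configuration is given below. The hidden-layer widths are assumptions (the abstract does not report them); the input dimension, dropout, learning rate, weight decay, batch size, optimizer, and loss follow the values stated above:

```python
import torch
import torch.nn as nn

class GradeHead(nn.Module):
    """Two-hidden-layer head over 512-d slide embeddings.
    Hidden sizes (256, 128) are illustrative assumptions."""
    def __init__(self, in_dim: int = 512, hidden: int = 256, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden // 2), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden // 2, 1),  # single logit: grade 2/3 vs grade 1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = GradeHead()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on raw logits

# One training step on a batch of 4 embeddings (batch size from the abstract)
x = torch.randn(4, 512)
y = torch.tensor([0.0, 1.0, 0.0, 1.0])  # 1 = grade 2/3
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

In practice the 5-fold split would be drawn at the patient level so that patches from one patient never appear in both training and validation folds.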
Results: 57 patients were included (median age 61 years, range 21-80 years; 64.9% female), comprising 30 grade 1, 24 grade 2, and 3 grade 3 meningiomas. Representative SRH images are shown in Figure 1. Across the five folds, the model achieved an average training accuracy of 0.89 and validation accuracy of 0.83, with corresponding F1 scores of 0.86 and 0.78. The average area under the ROC curve was 0.90 ± 0.12. Principal component analysis of the FastGlioma embeddings revealed clear separation between low- and high-grade tumors (Figure 2).
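The principal component projection used for Figure 2 can be sketched with a standard SVD-based PCA; the random embeddings below are placeholders, not the study data:

```python
import numpy as np

def pca_project(X: np.ndarray, k: int = 2) -> np.ndarray:
    """Project rows of X (n_samples, n_features) onto the
    top-k principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(57, 512))  # placeholder for 57 slide embeddings
coords = pca_project(embeddings)         # (57, 2); scatter-plot by tumor grade
```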
Conclusions: SRH distinguished low- from high-grade meningiomas with high accuracy, supporting its potential for intraoperative use. Limitations include small sample size and variability from frozen versus intraoperative tissue scanning. Future work should validate in larger cohorts and integrate SRH with molecular markers.


