2026 Proffered Presentations
S076: INTERPRETABLE AI-DRIVEN IDENTIFICATION OF THE BLINK REFLEX R1 IN CEREBELLOPONTINE ANGLE SURGERY
Jihad Abdelgadir1; Tanner J Zachem2; Syed M Adil2; Holly Johnson2; Kent K Yamamoto2; Aatif Husain2; Patrick J Codd2; Ali Zomorodi2; C. Rory Goodwin2; 1University of Utah; 2Duke University
Introduction: Safeguarding the facial nerve remains a central priority during cerebellopontine angle (CPA) surgery. Conventional intraoperative neuromonitoring (IONM) methods such as triggered and free-running electromyography provide valuable information, but are hindered by surgical disruptions, nonspecific signals, or delayed feedback. The blink reflex, specifically its early-latency R1 component, uniquely probes both central and peripheral facial nerve pathways without requiring direct visualization of the facial nerve. To address the need for real-time interpretation, we developed an interpretable machine learning algorithm capable of detecting blink reflex responses during CPA surgery.
Methods: We retrospectively obtained 12,795 stimulated EMG traces collected from 71 patients undergoing CPA procedures, including tumor resection and microvascular decompression. Recordings were annotated by IONM experts and labeled as R1 or non-R1. A Sparse Mixture of Learned Kernels (SMoLK) model was used to provide a low-parameter, interpretable architecture; unlike conventional black-box models, the reasoning behind each classification can be inspected directly. Model performance was evaluated via 5-fold patient-stratified cross validation and tested on a held-out cohort. Area under the receiver operating characteristic curve (AUROC) was the primary performance metric.
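To illustrate the general idea behind a SMoLK-style classifier, the sketch below scores a single EMG trace by convolving it with a small bank of learned 1D kernels, pooling each response, and combining the pooled features linearly. All dimensions, the pooling choice, and the random stand-in weights are assumptions for illustration, not the authors' trained configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions, not the study's settings).
N_KERNELS = 8      # number of learned 1D kernels
KERNEL_LEN = 32    # samples per kernel
TRACE_LEN = 512    # samples per stimulated EMG trace

# In training these would be fit by gradient descent; random stand-ins
# here serve only to demonstrate the forward pass.
kernels = rng.standard_normal((N_KERNELS, KERNEL_LEN))
weights = rng.standard_normal(N_KERNELS)
bias = 0.0

def smolk_forward(trace: np.ndarray) -> float:
    """Score one trace: convolve with each kernel, pool, combine linearly."""
    feats = np.empty(N_KERNELS)
    for i, k in enumerate(kernels):
        resp = np.convolve(trace, k, mode="valid")  # kernel response
        feats[i] = np.mean(np.abs(resp))            # magnitude pooling
    logit = feats @ weights + bias                  # small linear head
    return 1.0 / (1.0 + np.exp(-logit))             # probability of R1

p = smolk_forward(rng.standard_normal(TRACE_LEN))
print(p)
```

Because the model is just a kernel bank plus a linear head, each kernel can be plotted and compared against known R1 morphology, which is the basis of the interpretability claim.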
Results: On internal validation, the model achieved an AUROC of 0.857 (95% CI: 0.848–0.865); on the held-out test set, the AUROC was 0.907 (95% CI: 0.895–0.919) (Fig 1A). On the test set, the model had a sensitivity of 91.1%, specificity of 79.4%, and accuracy of 81.5% at Youden's threshold. Visualizations of the learned kernels showed strong co-localization with R1 waveform features, supporting model interpretability (Fig 1B).

Figure 1: A) ROC curves for the internal cross-validation cohort (gray) and external test cohort (blue). B) Learned kernel visualizations. Red regions indicate features highly predictive of R1.
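The operating point reported above uses Youden's threshold, i.e. the score cutoff maximizing J = sensitivity + specificity − 1. A minimal sketch on toy data (the labels and scores below are invented, not the study's traces):

```python
import numpy as np

def youden_threshold(y_true, scores):
    """Return the score cutoff maximizing sensitivity + specificity - 1."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):          # candidate cutoffs
        pred = scores >= t
        sens = np.sum(pred & y_true) / y_true.sum()      # true positive rate
        spec = np.sum(~pred & ~y_true) / (~y_true).sum() # true negative rate
        j = sens + spec - 1.0            # Youden's J statistic
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy example: binary R1 labels and hypothetical model scores
y = np.array([0, 0, 0, 1, 1, 1])
s = np.array([0.1, 0.3, 0.6, 0.5, 0.8, 0.9])
t, j = youden_threshold(y, s)
print(t, j)  # → 0.5 0.666...
```

For the reported test-set operating point, J = 0.911 + 0.794 − 1 ≈ 0.705, the value this procedure would have maximized over candidate cutoffs.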
Conclusion: Our model enables automated, real-time identification of blink reflex responses during CPA surgery in an interpretable manner. By enhancing facial nerve monitoring without interrupting the operative workflow, this approach is a useful adjunct to existing IONM techniques. Future directions include prospective validation at external centers and exploration of its role in predicting long-term facial nerve outcomes.
