North American Skull Base Society


2026 Proffered Presentations

 


 

S112: OPTIMIZING ARTIFICIAL INTELLIGENCE - SEGMENTATION OF SINONASAL MASSES IN ENDOSCOPIC IMAGES THROUGH AUGMENTATION OF META'S SEGMENT ANYTHING MODEL
Kenan Ye; Lirit Levi; Mahdokht Manavi; Maxime Fieux; Axel Renteria; Jayakar Nayak; Zara M. Patel; Noel F. Ayoub; Peter H. Hwang; Michael T. Chang; Stanford

Objective: Accurate segmentation of sinonasal lesions is essential for developing AI-based applications in endoscopic procedures. Current segmentation models are built primarily on convolutional neural network (CNN) architectures and plateau at an accuracy of approximately 0.75. Our objective was to develop an augmented version of Meta's Segment Anything Model (SAM) optimized for segmentation of sinonasal masses in endoscopic images.

Methods: We integrated SAM with a ResNet-50 CNN using cross-attention layers and trained the model on 1,242 endoscopy images from patients evaluated at an otolaryngology center (Figure 1). Images included tumors, polyps, or normal findings. Images were annotated and independently validated by 3 otolaryngologists. We compared the performance of augmented SAM with nnUNet and zero-shot SAM using the Dice similarity coefficient and Intersection-over-Union (IoU) metrics, applying an 8:1:1 train-validation-test split. For external validation, we tested augmented SAM on a publicly available dataset from another institution.
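The fusion step described above can be illustrated with a minimal single-head cross-attention layer in which SAM image tokens (queries) attend to flattened ResNet-50 feature-map tokens (keys and values). This is a sketch under stated assumptions, not the authors' architecture: the token counts, embedding dimension, and random projection matrices below merely stand in for trained weights.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(sam_tokens, cnn_tokens, seed=0):
    """SAM tokens query CNN feature tokens (illustrative only).

    sam_tokens: (N, d) SAM image-encoder tokens
    cnn_tokens: (M, d) flattened ResNet-50 feature-map tokens
    Random projections stand in for learned weights.
    """
    d = sam_tokens.shape[1]
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = sam_tokens @ Wq, cnn_tokens @ Wk, cnn_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))   # (N, M) attention over CNN tokens
    return sam_tokens + attn @ V           # residual connection, shape (N, d)

# toy example: 16 SAM tokens and 49 CNN tokens, embedding dim 32
fused = cross_attention(np.ones((16, 32)), np.ones((49, 32)))
print(fused.shape)  # (16, 32)
```

The output keeps the SAM token shape, so the fused representation can be fed back into a SAM-style mask decoder unchanged.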

Results: Augmented SAM achieved a Dice of 0.90 and an IoU of 0.82 for polyp segmentation (Figure 2), outperforming nnUNet (Dice 0.74, IoU 0.60) and zero-shot SAM (Dice 0.58, IoU 0.43). For tumors (Figure 3), augmented SAM reached a Dice of 0.87 and an IoU of 0.77, compared with nnUNet (Dice 0.73, IoU 0.58) and zero-shot SAM (Dice 0.61, IoU 0.45). On the external dataset, augmented SAM maintained high accuracy for tumors (Dice 0.86) and polyps (Dice 0.90).
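The Dice and IoU figures reported above can be computed directly from binary masks. A minimal NumPy sketch on toy 4×4 masks (the arrays are illustrative, not study data):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and Intersection-over-Union for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0   # empty masks count as perfect match
    iou = inter / union if union else 1.0
    return dice, iou

# toy masks: 3 predicted pixels, 3 ground-truth pixels, 2 overlapping
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
d, i = dice_iou(pred, gt)
print(round(d, 3), round(i, 3))  # 0.667 0.5
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is consistent with each Dice/IoU pair reported in the results.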

Conclusions: Augmented SAM significantly improves segmentation accuracy for sinonasal masses, outperforming existing benchmarks. This lays the groundwork for the development of real-time, lesion-aware AI tools in rhinologic procedures.

 


Copyright © 2026 North American Skull Base Society · Managed by BSC Management, Inc · All Rights Reserved