2025 Poster Presentations
P425: LEVERAGING MACHINE LEARNING FOR INTUITIVE 3D VISUALIZATION IN SKULLBASE NEUROSURGERY TRAINING
Kiefer J Forseth, MD, PhD1; Michael G Brandel, MD1; Jillian H Plonsker, MD2; Jeffrey A Steinberg, MD1; Michael Levy1; Alexander A Khalessi, MD, MBA1; 1University of California San Diego; 2Lurie Children's Hospital of Chicago
Skullbase neurosurgery requires a trainee to develop an intimate familiarity with the 3-dimensional anatomy of the target region as well as the approach corridor. This begins with careful study of 2-dimensional representations, including stylized drawings, exemplar dissections, and intraoperative videos. The transition from a 2- to a 3-dimensional understanding of this anatomy is challenging and often accomplished only through painstaking cadaveric practice.
Our objective was to translate the 2-dimensional teaching manuals of skullbase neurosurgery into an intuitive 3-dimensional format that directly mimics the surgeon's vantage and mobility.
We adapted recent advances in machine learning to accomplish this aim. Visual information from a cadaveric dissection scene was collected with a variety of cameras: iPhone, GoPro, endoscope, or microscope. Approximately 200 images were sufficient to generate 4K-quality models. The images were spatially registered with structure-from-motion, estimating camera poses and intrinsic parameters. These outputs served as inputs for 3D Gaussian splatting (3DGS), which enables novel-view synthesis by optimizing a volumetric representation composed of Gaussian primitives, each defined by position, opacity, and anisotropic covariance. All models were trained and rendered with an NVIDIA RTX 4070.
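To illustrate the core idea behind the Gaussian-primitive representation, the following is a minimal 2D sketch, not the authors' implementation (which uses the CUDA rasterizer of 3DGS): each splat is a Gaussian with a mean position, an anisotropic covariance, an opacity, and a color, and the image forms by front-to-back alpha compositing. All function and variable names here are illustrative.

```python
import numpy as np

def splat_gaussians(means, covs, opacities, colors, H, W):
    """Toy 2D analogue of the 3DGS rasterizer: composite anisotropic
    Gaussian splats onto an H x W image, front to back."""
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys], axis=-1).astype(float)        # (H, W, 2) pixel coords
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))                        # light not yet absorbed
    for mu, cov, alpha, rgb in zip(means, covs, opacities, colors):
        d = pix - mu                                       # offset from splat center
        inv = np.linalg.inv(cov)
        # Mahalanobis distance gives the anisotropic Gaussian falloff
        m = np.einsum('hwi,ij,hwj->hw', d, inv, d)
        a = alpha * np.exp(-0.5 * m)                       # per-pixel effective alpha
        image += (transmittance * a)[..., None] * rgb      # composite this splat
        transmittance *= (1.0 - a)                         # occlude splats behind it
    return image

# One red splat centered in a 16x16 frame; its center pixel takes the
# splat's full opacity-weighted color.
img = splat_gaussians(
    means=np.array([[8.0, 8.0]]),
    covs=np.array([[[4.0, 0.0], [0.0, 4.0]]]),
    opacities=[0.8],
    colors=np.array([[1.0, 0.0, 0.0]]),
    H=16, W=16)
```

In the full method, the same compositing runs over 3D Gaussians projected into each training view, and position, covariance, opacity, and color are optimized by gradient descent against the captured photographs.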
With cadaveric specimens, we completed 10 classic neurosurgical approaches and captured models at 3 points in each dissection: pre-incision, dural exposure, and final intracranial view. The real-time rasterization of 3DGS enabled direct rendering of these models in virtual reality with a Meta Quest 3 headset. The trainee could thus move freely around the dissection table and directly manipulate the field with hand controllers. The casual capture process and low computational cost allowed trainees to archive their own dissections for later review, and enabled faculty to review trainees' dissections in detail without being physically present in the lab.
We endeavor to accelerate and enrich the training of neurosurgical residents in complex neuroanatomy by leveraging recent advances in computer vision, creating an interactive 3D manual of skullbase dissection.