2026 Proffered Presentations
S181: BEYOND PHOTOGRAMMETRY: CREATING HYPERREALISTIC NEUROANATOMICAL MODELS WITH RADIANCE FIELDS
Roberto Rodriguez Rubio, MD; Jose Mariano Navarrete; Marco Obersnel, MD; Chiara Angelini, MD; Hao Tang, MD; Ivan El-Sayed; UCSF
Objective: Photogrammetry has become a cornerstone for creating three-dimensional (3D) digital models in surgical neuroanatomy, preserving complex dissections for education and research. Recently, AI-powered radiance fields have emerged, building upon this foundation to achieve unprecedented levels of realism and interactivity. This technical note outlines the modern workflow for two leading radiance field techniques—Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS)—and reviews the ecosystem of tools for processing, rendering, editing, and sharing these advanced 3D assets.
Methods: The workflow begins with multi-angle image acquisition, followed by AI-driven volumetric processing. Processing can be performed locally using command-line interfaces or applications such as Nerfstudio, or through cloud-based platforms (Luma, Polycam, Kiri Engine). Rendering these assets is achieved on local machines via dedicated viewers and game engines (Unity, Unreal Engine) or streamed from the cloud. For immersive interaction, models can be deployed to virtual reality headsets (Meta Quest 3) and the Apple Vision Pro using specialized viewers (Gracia, MetalSplatter). A growing number of tools support editing (SuperSplat, Postshot) and interactive online sharing (Spline, Luma, StorySplat), creating a comprehensive pipeline from capture to distribution.
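As an illustration, the local command-line route described above can be sketched with the open-source Nerfstudio toolkit. This is a sketch only: command names and flags track the current Nerfstudio release, and the dataset path and `<run>` directory are hypothetical placeholders.

```shell
# Estimate camera poses from the multi-angle dissection photos
# (Nerfstudio runs COLMAP structure-from-motion under the hood)
ns-process-data images --data dissection_photos/ --output-dir processed/

# Train either a NeRF (nerfacto) or a 3D Gaussian Splatting model (splatfacto)
ns-train nerfacto --data processed/
ns-train splatfacto --data processed/

# Export the trained Gaussians as a portable .ply for editing and sharing tools
ns-export gaussian-splat \
    --load-config outputs/processed/splatfacto/<run>/config.yml \
    --output-dir exports/
```

The exported .ply can then be opened directly in the editing and sharing tools named above.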
Results: NeRF generates implicit, high-fidelity neural assets that excel at representing complex, semi-transparent structures and view-dependent lighting effects, making them ideal for detailed anatomical study (Fig. 1). In contrast, 3DGS produces an explicit point cloud of 3D Gaussians stored in a lightweight .PLY file. This structure enables real-time, high-frame-rate rendering, facilitating fluid interaction, surgical simulation, and seamless integration into existing 3D software pipelines (Fig. 2). The key distinction lies in NeRF's higher visual fidelity for static scenes versus 3DGS's real-time performance and pipeline versatility.
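Because 3DGS assets are plain .PLY files, they can be inspected with a few lines of standard-library Python. The sketch below parses the ASCII header and reads the Gaussian centers; it assumes a binary little-endian file whose per-vertex properties are all float32, as is typical of 3DGS exports, and the function names are our own.

```python
import struct

def read_splat_header(data: bytes):
    """Parse the ASCII header of a binary .PLY file produced by a
    3DGS exporter. Returns (gaussian_count, property_names, body_offset)."""
    end = data.index(b"end_header\n") + len(b"end_header\n")
    count, props = 0, []
    for line in data[:end].decode("ascii").splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[:2] == ["element", "vertex"]:
            count = int(parts[2])          # number of 3D Gaussians
        elif parts[0] == "property":
            props.append(parts[2])         # e.g. x, y, z, opacity, scale_0 ...
    return count, props, end

def read_positions(data: bytes):
    """Return the (x, y, z) center of each Gaussian, assuming every
    per-vertex property is a little-endian float32."""
    count, props, off = read_splat_header(data)
    stride = 4 * len(props)
    ix, iy, iz = props.index("x"), props.index("y"), props.index("z")
    centers = []
    for i in range(count):
        vals = struct.unpack_from(f"<{len(props)}f", data, off + i * stride)
        centers.append((vals[ix], vals[iy], vals[iz]))
    return centers
```

This simple, explicit layout is what allows 3DGS assets to move between editors, viewers, and game-engine pipelines with little friction.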
Conclusion: AI-powered radiance fields mark a significant evolution from traditional photogrammetry in surgical neuroanatomy. By offering both the hyper-realism of NeRF and the real-time interactivity of 3DGS, these technologies dramatically enhance visuospatial understanding for surgical training and preoperative planning. The rapidly expanding ecosystem of accessible tools for capture, editing, and deployment is democratizing the creation of these next-generation digital twins, promising a new era of immersive surgical education.


