NASBS

North American Skull Base Society


2026 Proffered Presentations

S010: MERGING REALITY AND VIRTUALITY: PHOTOREALISTIC INTEGRATION OF 3D ANATOMICAL MODELS INTO REAL PHOTOGRAPHS
Marco Obersnel, MD; Chiara Angelini, MD; Hao Tang, MD; Roberto Rodriguez Rubio, MD; UCSF

Introduction: Traditional anatomical learning relies heavily on two-dimensional (2D) illustrations, radiological images, and cadaveric dissections. However, surgical anatomy is inherently three-dimensional (3D), and complex spatial relationships are often difficult to convey with flat images. While digital 3D models offer superior spatial accuracy, studying them requires dedicated viewer software, constant user interaction (rotation, zooming, slicing), and sometimes powerful hardware, which can be time-consuming and cumbersome. Paradoxically, static 2D images remain easier and faster to read, annotate, integrate into publications, and print. It is therefore important to determine how the spatial advantages of 3D anatomy can be preserved while maintaining the immediacy and practicality of 2D images.

We propose a reproducible, low-cost workflow to generate photorealistic composite images that integrate 3D models into real photographs. This approach allows the user to highlight specific anatomical structures while keeping the realistic context of the photograph, effectively combining the clarity of 2D images with the depth and spatial information of 3D models.

Methods: A multi-step pipeline was designed using open-source tools. Photographs of anatomical specimens were acquired under controlled lighting, and camera parameters (focal length, sensor size, aperture, and focal distance) were recorded. A photogrammetry model (PGMm) of the anatomy was generated with a mobile app (Polycam). Perspective and camera position were reconstructed using fSpy, which allowed exporting a calibrated virtual camera directly into Blender 4.5.
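The recorded camera parameters map directly onto the pinhole-camera model that both fSpy and Blender use. As an illustration only (not part of the authors' pipeline, where fSpy solves these values automatically from the photograph), the horizontal field of view implied by a given focal length and sensor width can be sketched as:

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view of a pinhole camera, in degrees.

    Blender's camera follows the same relation (Camera > Lens panel);
    this sketch merely shows how the recorded parameters relate.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: a full-frame sensor (36 mm wide) with a 50 mm lens
fov = horizontal_fov_deg(50, 36)  # roughly 40 degrees
```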

The target 3D model (such as arteries or cranial nerves) was imported into the Blender scene, aligned, and scaled according to anatomical landmarks visible in the photograph. Lighting was recreated in Blender to match the original photo by positioning light sources based on observed shadows and highlights. The PGMm was assigned as a shadow catcher, enabling the capture of realistic shadows without rendering the catcher geometry itself. The final render included only the 3D object and its shadow, which was then composited over the original photograph to create a seamless, photorealistic integration.

Results: The resulting composite images were visually coherent, with 3D models appearing naturally embedded in the real scene. Matching perspective and lighting through fSpy and Blender was found to be intuitive and reproducible after minimal training. The workflow was user-friendly, requiring basic Blender skills, and could be completed in less than 15 minutes per image once camera parameters were known. The resulting images provided clear volumetric references, improved depth perception, and allowed selective highlighting of relevant anatomical details. Transparency, color coding, and labeling could be easily adjusted, making the tool highly flexible for educational and research purposes.

Conclusions: This workflow offers a practical, low-cost solution for combining photographs and 3D models into photorealistic, anatomically accurate images. It enables the creation of customized educational material, enhances spatial understanding, and provides a powerful tool for surgical training and research communication. Although the process requires basic 3D modeling and rendering skills, it significantly reduces the complexity compared to creating fully synthetic scenes or performing full photogrammetry. Future developments may integrate automatic camera calibration and lighting estimation, further streamlining the process and making it even more accessible to anatomists, surgeons, and educators.

Figure 1


Copyright © 2026 North American Skull Base Society · Managed by BSC Management, Inc · All Rights Reserved