NASBS

North American Skull Base Society

2026 Proffered Presentations

S017: CLICK, PROMPT, MODEL: FAST AI-DRIVEN ANATOMICAL AND SURGICAL 3D DESIGN - BUT NOT WITHOUT YOU
Chiara Angelini, MD; Marco Obersnel, MD; Hao Tang, MD; Roberto Rodriguez Rubio, MD; UCSF

Introduction: Accurate three-dimensional (3D) models are valuable tools in surgical planning, education, and anatomical research. Traditionally, two main approaches have been used: photogrammetry, which requires capturing and processing multiple high-quality images, and manual 3D modeling from scratch, which is time-consuming and demands advanced design skills. Recently, AI-assisted tools have emerged that generate a first 3D draft from a single image and natural-language prompts. These systems promise to accelerate model creation and lower the technical barrier to entry while still allowing iterative refinement.

Methods: We tested two AI-based tools.

Tripo AI v3.0 (VAST, Dongcheng, China) generates 3D meshes directly from a single image and allows basic iterative refinement through text prompts. This step provides a quick volumetric approximation of the target object, which can then be downloaded in a standard format such as .glb or .obj.
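
The .obj format mentioned above is plain text, which makes the exported mesh easy to inspect before refinement. As a minimal sketch (handling only vertex and triangular-face records, not the full Wavefront specification), a reader might look like this:

```python
# Minimal Wavefront .obj reader -- a sketch for inspecting an exported mesh,
# not a complete parser (ignores normals, UVs, materials, groups).

def read_obj(lines):
    """Return (vertices, faces) from an iterable of .obj text lines."""
    vertices, faces = [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            # geometric vertex: "v x y z"
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # face: "f v1 v2 v3"; indices are 1-based and may carry
            # "/vt/vn" suffixes, so keep only the vertex index
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A two-triangle toy mesh in .obj form:
sample = """
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
f 1 2 3
f 1 3 4
""".strip().splitlines()

verts, tris = read_obj(sample)
print(len(verts), len(tris))  # 4 vertices, 2 triangular faces
```

In practice a library such as Blender's importer would be used instead; the sketch only illustrates what the downloaded file contains.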

Blender MCP connects Blender v4.5 (Blender Foundation, Amsterdam, Netherlands) and Claude (Anthropic PBC, San Francisco, USA) through the Model Context Protocol (MCP). This protocol allows Claude to interact directly with Blender, execute modeling commands, and iteratively modify the mesh in real time based on natural-language instructions. This integration enables a more interactive design process than prompt-only approaches, giving the user more granular control over geometry and topology within Blender.
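
MCP exchanges JSON-RPC 2.0 messages between the client (Claude) and a tool server (here, Blender). The sketch below builds such a tool-call request; the tool name "execute_blender_code" and its "code" argument are assumptions about the Blender MCP server's tool schema, shown only to illustrate the message shape, not the actual API.

```python
# Sketch of an MCP "tools/call" request as a JSON-RPC 2.0 message.
# Tool name and arguments below are illustrative assumptions.
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request and return it as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# e.g. ask Blender to smooth-shade the selected object via its Python API:
msg = mcp_tool_call(
    1,
    "execute_blender_code",  # hypothetical tool name
    {"code": "import bpy; bpy.ops.object.shade_smooth()"},
)
print(msg)
```

The key point is that every edit Claude performs in Blender travels as a structured message like this, which is what makes the real-time, iterative loop possible.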

Both tools were evaluated for responsiveness, ease of use, and suitability for downstream manual refinement.

Results: Both Tripo AI and Blender MCP generated a first-pass 3D model within minutes from a single image. Tripo AI produced usable meshes, but its interface for iterative refinement was limited and less intuitive, often requiring multiple attempts to approximate the desired shape. Claude + Blender MCP was more responsive, enabling real-time adjustments and giving users more freedom to edit the mesh immediately after generation.

Despite the promising results, neither approach yielded models ready for direct use: manual intervention with standard 3D modeling tools (retopology, scaling, mesh cleanup) was always required. Nonetheless, starting from an AI-generated mesh greatly reduced total modeling time compared to creating models from scratch. Furthermore, the AI-generated output provided excellent volumetric references and was particularly useful for generating automatic labeling and segmentation. Although these labels were not always accurate, they were easy to edit and improved overall workflow efficiency.
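
One of the manual steps named above, scaling, arises because AI-generated meshes come out in arbitrary units. A common fix is to rescale all vertices uniformly so the model's largest bounding-box extent matches a known anatomical measurement. The sketch below assumes vertices as (x, y, z) tuples; the 180 mm target is an illustrative value, not a figure from this study.

```python
# Sketch of the manual scaling step: rescale an AI-generated mesh so its
# largest bounding-box dimension equals a known real-world size (in mm).

def scale_to_size(vertices, target_mm):
    """Uniformly scale (x, y, z) tuples so the largest extent equals target_mm."""
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    extent = max(hi - lo for hi, lo in zip(maxs, mins))
    s = target_mm / extent
    return [(x * s, y * s, z * s) for x, y, z in vertices]

# Toy example: vertices spanning a unit bounding box, scaled to 180 mm
cloud = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)]
scaled = scale_to_size(cloud, 180.0)
print(scaled[1])  # (180.0, 0.0, 0.0)
```

Retopology and mesh cleanup are harder to automate and, as the abstract notes, still required interactive work in standard 3D modeling tools.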

Conclusions: AI-assisted 3D model generation represents a valid and time-saving alternative to both photogrammetry and manual modeling from scratch. While current tools cannot yet deliver definitive, ready-to-use models, they provide a valuable starting point that significantly reduces manual workload. Users still require basic 3D modeling skills to refine the final geometry, but the combination of AI generation and manual editing results in a faster, more efficient pipeline. This hybrid workflow is especially suited for applications where rapid prototyping and volumetric references are more important than perfect accuracy.


Copyright © 2026 North American Skull Base Society · Managed by BSC Management, Inc · All Rights Reserved