2026 Proffered Presentations
S017: CLICK, PROMPT, MODEL: FAST AI-DRIVEN ANATOMICAL AND SURGICAL 3D DESIGN - BUT NOT WITHOUT YOU
Chiara Angelini, MD; Marco Obersnel, MD; Hao Tang, MD; Roberto Rodriguez Rubio, MD; UCSF
Introduction: Accurate three-dimensional (3D) models are valuable tools in surgical planning, education, and anatomical research. Traditionally, two main approaches are used: photogrammetry, which requires capturing and processing multiple high-quality images, and manual 3D modeling from scratch, which is time-consuming and demands advanced design skills. Recently, AI-assisted tools have emerged that allow users to generate a first 3D draft from a single image and natural-language prompts. These systems promise to accelerate model creation and lower the technical barrier to entry, while still allowing iterative refinement.
Methods: We tested two AI-based tools.
Tripo AI v3.0 (VAST, Dongcheng, China) generates 3D meshes directly from a single image and allows basic iterative refinement through text prompts. This step provides a quick volumetric approximation of the target object, which can then be downloaded in a standard format such as .glb or .obj.
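As a hedged illustration of what downstream processing of the exported mesh can look like (this snippet is not part of the authors' pipeline): the .obj format mentioned above is plain text, so its vertex coordinates can be read with a few lines of Python before further editing. The file contents here are purely illustrative.

```python
# Minimal sketch: extract vertex coordinates from Wavefront OBJ text.
# OBJ is line-based; lines beginning with 'v' carry vertex positions.

def read_obj_vertices(text):
    """Return a list of (x, y, z) tuples from OBJ-formatted text."""
    verts = []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":  # 'v x y z' (ignores 'vt'/'vn' lines)
            verts.append(tuple(float(c) for c in parts[1:4]))
    return verts

# Illustrative OBJ fragment, not a real exported model
sample = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
f 1 2 1
"""
print(read_obj_vertices(sample))  # → [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
```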
Blender MCP connects Blender v4.5 (Blender Foundation, Amsterdam, Netherlands) to Claude (Anthropic PBC, San Francisco, USA) through the Model Context Protocol (MCP). The protocol allows Claude to interact directly with Blender, execute modeling commands, and iteratively modify the mesh in real time from natural-language instructions. Compared to prompt-only approaches, this integration enables a more interactive design process and gives the user more granular control over geometry and topology within Blender.
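For readers unfamiliar with MCP, the link described above is typically established by registering an MCP server in Claude's desktop configuration file. The fragment below is a sketch assuming a community blender-mcp server launched via uvx; the abstract does not specify which server implementation the authors used, so the names here are illustrative.

```json
{
  "mcpServers": {
    "blender": {
      "command": "uvx",
      "args": ["blender-mcp"]
    }
  }
}
```

Once registered, Claude can issue modeling commands to the running Blender instance over this channel.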
Both tools were evaluated for responsiveness, ease of use, and suitability for downstream manual refinement.
Results: Both Tripo AI and Blender MCP generated a first-pass 3D model within minutes from a single image. Tripo AI produced usable meshes, but its interface for iterative refinement was limited and less intuitive, often requiring multiple attempts to approximate the desired shape. Claude + Blender MCP was more responsive, enabling real-time adjustments and giving users more freedom to edit the mesh immediately after generation.
Despite the promising results, neither approach yielded models ready for direct use: manual intervention with standard 3D modeling tools (retopology, scaling, mesh cleanup) was always required. Nonetheless, starting from an AI-generated mesh greatly reduced total modeling time compared to creating models from scratch. Furthermore, the AI-generated output provided excellent volumetric references and was particularly useful for generating automatic labeling and segmentation. Although these labels were not always accurate, they were easy to edit and improved overall workflow efficiency.
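Two of the manual refinement steps mentioned above, scaling and mesh cleanup, can be sketched in plain Python without Blender. This is a minimal illustration under assumed inputs (the vertex list and the 150 mm reference height are hypothetical), not the authors' actual workflow, which used standard 3D modeling tools.

```python
# Sketch of two cleanup steps for an AI-generated mesh:
# (1) uniform rescaling to a known anatomical dimension,
# (2) merging coincident vertices (cf. Blender's "Merge by Distance").

def rescale_to_height(vertices, target_height):
    """Uniformly scale vertices so the bounding-box Z extent equals target_height."""
    zs = [v[2] for v in vertices]
    factor = target_height / (max(zs) - min(zs))
    return [(x * factor, y * factor, z * factor) for (x, y, z) in vertices]

def merge_duplicates(vertices, tol=1e-6):
    """Collapse vertices closer than tol by snapping them to a grid of cell size tol."""
    seen, merged = {}, []
    for v in vertices:
        key = tuple(round(c / tol) for c in v)
        if key not in seen:
            seen[key] = len(merged)
            merged.append(v)
    return merged

# Illustrative tetrahedron with one duplicated vertex,
# scaled to a hypothetical 150 mm anatomical reference height
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 1.0)]
verts = merge_duplicates(verts)          # 5 vertices -> 4
verts = rescale_to_height(verts, 150.0)  # Z extent becomes 150.0
```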
Conclusions: AI-assisted 3D model generation represents a valid and time-saving alternative to both photogrammetry and manual modeling from scratch. While current tools cannot yet deliver definitive, ready-to-use models, they provide a valuable starting point that significantly reduces manual workload. Users still require basic 3D modeling skills to refine the final geometry, but the combination of AI generation and manual editing results in a faster, more efficient pipeline. This hybrid workflow is especially suited for applications where rapid prototyping and volumetric references are more important than perfect accuracy.



