- Shap-E: Generating Conditional 3D Implicit Functions
- Zero-Shot Text-Guided Object Generation with Dream Fields
- DreamFusion: Text-to-3D using 2D Diffusion
- Magic3D: High-Resolution Text-to-3D Content Creation
- TextDeformer: Geometry Manipulation using Text Guidance
- TextMesh: Generation of Realistic 3D Meshes From Text Prompts
- DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models
- DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance
- AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control
- X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance
- Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion
- TEXTure: Text-Guided Texturing of 3D Shapes
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
- Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation
- CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D Scene Layout
- Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes
- Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
- SceneScape: Text-Driven Consistent Scene Generation
- Text2Room: Extracting Textured 3D Meshes from 2D Text-to-Image Models
- 3D-CLFusion: Fast Text-to-3D Rendering with Contrastive Latent Diffusion
- Compositional 3D Scene Generation using Locally Conditioned Diffusion
- Text2Tex: Text-driven Texture Synthesis via Diffusion Models
- SKED: Sketch-guided Text-based 3D Editing
- Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation
- Zero3D: Semantic-Driven Multi-Category 3D Shape Generation
- Text-To-4D Dynamic Scene Generation
- CLIP-Mesh: Generating Textured Meshes from Text Using Pretrained Image-Text Models
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
- Text2Mesh: Text-Driven Neural Stylization for Meshes
- TANGO: Text-Driven Photorealistic and Robust 3D Stylization via Lighting Decomposition