3D Generation Survey
Survey: Advances in 3D Generation: A Survey
3D Representations: Neural Representations
The generation process typically involves a scene representation and a differentiable rendering algorithm, used to create 3D models and render 2D images. Two supervision strategies:
(1) Directly supervise the 3D model underlying the scene representation.
(2) Render the scene representation into images and supervise the generated 2D renderings.
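The two supervision strategies can be contrasted in a toy sketch. Everything here is a stand-in (a tiny occupancy voxel grid as the "scene", a soft max-projection as the "differentiable renderer"), not any paper's actual pipeline:

```python
import numpy as np

def render(voxels: np.ndarray) -> np.ndarray:
    """Project a (D, H, W) occupancy grid to an (H, W) image via a soft max
    along depth; the softmax weighting keeps the operation differentiable."""
    w = np.exp(voxels * 10.0)
    return (voxels * w).sum(axis=0) / w.sum(axis=0)

def mse(a, b):
    return float(((a - b) ** 2).mean())

gt = np.zeros((4, 8, 8)); gt[1, 2:6, 2:6] = 1.0    # ground-truth scene
pred = np.zeros_like(gt); pred[1, 2:6, 2:5] = 1.0  # imperfect generation

loss_3d = mse(pred, gt)                   # (1) supervise the 3D model directly
loss_2d = mse(render(pred), render(gt))   # (2) supervise rendered 2D images
```

Strategy (1) needs ground-truth 3D data, which is scarce; strategy (2) only needs images, which is why most recent methods supervise through rendering.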
Explicit Scene Representations
Point Clouds:
Surfels are used in computer graphics to render point clouds via splatting, and are differentiable.
- Neural point-based graphics.
- Neural point cloud rendering via multi-plane projection.
- Synsin: End-to-end view synthesis from a single image.
- ... (these methods typically embed features in the point cloud and transform them to the target view to decode color values, allowing more accurate and detailed scene reconstruction)
- EWA splatting
- Learning efficient point cloud generation for dense 3d object reconstruction.
Meshes:
Multi-layer Representations:
Implicit Representations
NeRFs (in the broad sense)
- NeRF: Representing scenes as neural radiance fields for view synthesis
- Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields
- Instant-NGP: Instant neural graphics primitives with a multiresolution hash encoding
- 3DGS: 3D Gaussian splatting for real-time radiance field rendering
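The core of NeRF-style methods is the discrete volume rendering equation, C = Σᵢ Tᵢ·(1 − exp(−σᵢδᵢ))·cᵢ with transmittance Tᵢ = Πⱼ<ᵢ (1 − αⱼ). A minimal NumPy sketch of that compositing step (sample values here are illustrative):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Discrete NeRF compositing along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)      # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])   # shift so T_1 = 1
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights

# one opaque red sample behind empty space
sigmas = np.array([0.0, 50.0])
colors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
deltas = np.array([0.5, 0.5])
rgb, w = volume_render(sigmas, colors, deltas)   # rgb is close to pure red
```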
Neural Implicit Surfaces
- NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction
- VolSDF: Volume rendering of neural implicit surfaces
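These methods volume-render a signed distance function by converting SDF values into density. A sketch of the VolSDF-style conversion, σ(x) = α·Ψβ(−d(x)) where Ψβ is the CDF of a zero-mean Laplace distribution (parameter values here are illustrative):

```python
import numpy as np

def sdf_to_density(sdf, alpha=1.0, beta=0.1):
    """VolSDF-style density: alpha * LaplaceCDF(-sdf; scale=beta).
    Deep inside the surface (sdf << 0) density -> alpha; far outside -> 0."""
    s = -np.asarray(sdf)
    cdf = np.where(s <= 0, 0.5 * np.exp(s / beta), 1.0 - 0.5 * np.exp(-s / beta))
    return alpha * cdf

d = np.array([-0.5, 0.0, 0.5])   # inside, on-surface, outside
rho = sdf_to_density(d)          # density decreases monotonically with sdf
```

Because the surface is the SDF zero level set, this keeps rendering NeRF-like while letting a clean mesh be extracted afterwards.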
Hybrid Representations
Voxel Grids
- Instant-NGP: Instant neural graphics primitives with a multiresolution hash encoding
Tri-plane
- TensoRF: Tensorial radiance fields
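A tri-plane stores features on three axis-aligned 2D grids instead of a full 3D grid; a point's feature is gathered by projecting it onto each plane and combining the lookups. A minimal sketch (nearest-neighbor sampling for brevity; real systems such as EG3D use bilinear interpolation):

```python
import numpy as np

def triplane_features(planes, xyz):
    """Sum features from three axis-aligned planes (XY, XZ, YZ).
    planes: (3, R, R, C) feature grids; xyz: point in [0, 1)^3."""
    R = planes.shape[1]
    x, y, z = np.clip((xyz * R).astype(int), 0, R - 1)
    return planes[0, x, y] + planes[1, x, z] + planes[2, y, z]

rng = np.random.default_rng(0)
planes = rng.standard_normal((3, 16, 16, 8))  # resolution 16, 8 channels
feat = triplane_features(planes, np.array([0.2, 0.5, 0.9]))  # shape (8,)
```

Memory grows as O(3R²C) rather than the O(R³C) of a dense voxel grid, which is why tri-planes pair well with 2D generator backbones.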
Hybrid Surface Representation
- DMTet: Deep marching tetrahedra: a hybrid representation for high-resolution 3D shape synthesis
2D Generative Models: Diffusion Models
Diffusion Models / Generative Artificial Intelligence
- DDPM: Denoising diffusion probabilistic models
- LDM (Latent Diffusion): High-resolution image synthesis with latent diffusion models
- IDDPM: Improved denoising diffusion probabilistic models
- Stable Diffusion: High-resolution image synthesis with latent diffusion models
- Imagen: Photorealistic text-to-image diffusion models with deep language understanding
- Midjourney
- DALL-E 3: OpenAI
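All of these build on the DDPM forward process, whose closed form x_t = √ᾱ_t·x₀ + √(1 − ᾱ_t)·ε, ε ~ N(0, I) lets any timestep be sampled directly. A small sketch with the standard linear β schedule:

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng):
    """Closed-form DDPM forward noising at timestep t:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)       # standard linear schedule
x0 = rng.standard_normal((8, 8))            # stand-in "image"
x_t, eps = ddpm_forward(x0, t=999, betas=betas, rng=rng)
# at the final step, alpha_bar is tiny, so x_t is almost pure noise
```

The reverse (generative) direction trains a network to predict ε from x_t, which is the quantity that 3D methods like DreamFusion later distill.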
GANs
- GAN: Generative adversarial nets
- Image2StyleGAN: How to embed images into the StyleGAN latent space
VAEs
Autoregressive
Background
- Scarcity of 3D data
- Evaluation metrics (must account for multi-view consistency)
3D Generation Achievements
- 3D-GAN: Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling
- DeepSDF: Learning continuous signed distance functions for shape representation
- DMTet: Deep marching tetrahedra: a hybrid representation for high-resolution 3D shape synthesis
- EG3D: Efficient geometry-aware 3D generative adversarial networks
- DreamFusion: Text-to-3D using 2D diffusion
- Point-E: A system for generating 3D point clouds from complex prompts
- Zero-1-to-3: Zero-shot one image to 3D object
- Instant3D: Fast text-to-3D with sparse-view generation and large reconstruction model
- AutoSDF: transformer + voxel grid
- EG3D: GAN + tri-plane
- SSDNeRF: diffusion + tri-plane
3D Generation Methods
Feedforward Generation
GAN
- point clouds: l-GAN/r-GAN, tree-GAN
- voxel grids: 3D-GAN, Z-GAN
- meshes: MeshGAN
- SDF: SurfGen, SDF-StyleGAN
Optimization-Based Generation
Procedural Generation
Generative Novel View Synthesis
Related Datasets
Optimization-Based Generation
Dream Fields (text-to-3D)
DreamFusion (text-to-3D)
Make-it-3D (Image-to-3D)
Magic3D (Image/text-to-3D)
ProlificDreamer (text-to-3D)
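These optimization-based methods share the score distillation sampling (SDS) idea from DreamFusion: render the current 3D model, noise the rendering, ask a frozen 2D diffusion model to denoise it, and use w(t)·(ε̂ − ε) as the gradient on the rendered image, skipping the denoiser's Jacobian. A toy sketch with a hypothetical stand-in denoiser (the real one is a pretrained text-conditioned U-Net):

```python
import numpy as np

rng = np.random.default_rng(0)

def denoiser(x_t, t):
    """Stand-in for a frozen diffusion model's noise prediction
    (hypothetical: not a real pretrained network)."""
    return x_t * 0.9

def sds_gradient(rendered, t, alpha_bar, w=1.0):
    """SDS gradient on a rendered view: w(t) * (eps_pred - eps).
    The U-Net Jacobian is omitted; backprop continues through the renderer
    into the 3D parameters (NeRF weights, mesh vertices, ...)."""
    eps = rng.standard_normal(rendered.shape)
    x_t = np.sqrt(alpha_bar) * rendered + np.sqrt(1.0 - alpha_bar) * eps
    eps_pred = denoiser(x_t, t)
    return w * (eps_pred - eps)

img = rng.standard_normal((4, 4))            # a rendered view (toy)
g = sds_gradient(img, t=500, alpha_bar=0.5)  # gradient w.r.t. the pixels
```

ProlificDreamer's VSD replaces the single (ε̂ − ε) residual with the difference between the pretrained score and a fine-tuned, view-conditioned score, reducing the over-saturation SDS tends to produce.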
Feedforward Generation
GANs:
3D GANs
tree-GAN (point cloud)
VAEs
NeRF-VAE
Autoregressive Models
PolyGen
Normalizing Flows
PointFlow
Diffusion Models
MeshDiffusion (mesh)
LION (point cloud)
Point-E (point cloud)
Diffusion-SDF (SDF)
Shap-E (radiance field)
Procedural Generation
Creates 3D models and textures from sets of rules.
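A classic example of such a rule system is an L-system: a string of symbols is repeatedly rewritten by production rules, and a graphics tool then interprets the symbols as geometry (e.g. "F" = extrude a branch segment, "[" / "]" = push/pop the transform). The rules below are the usual illustrative binary-tree example, not from the survey:

```python
def lsystem(axiom: str, rules: dict, steps: int) -> str:
    """Expand an L-system: rewrite every symbol by its rule each step;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

# classic binary-tree style rules
rules = {"F": "FF", "X": "F[+X][-X]"}
out = lsystem("X", rules, steps=2)
# -> "FF[+F[+X][-X]][-F[+X][-X]]"
```

Procedural rules give unlimited, controllable variation for vegetation, terrain, and cities, at the cost of hand-authoring the rules themselves.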
Generative Novel View Synthesis
GAN-based:
- PixelSynth
Diffusion-based:
- Zero-1-to-3
- Zero123++