MeshSplatting: Differentiable Rendering with Opaque Meshes
Abstract
MeshSplatting, a mesh-based reconstruction method, enhances novel view synthesis by jointly optimizing geometry and appearance through differentiable rendering, improving quality and efficiency over existing mesh-based techniques.
Primitive-based splatting methods such as 3D Gaussian Splatting have revolutionized novel view synthesis with real-time rendering. However, their point-based representations remain incompatible with the mesh-based pipelines that power AR/VR and game engines. We present MeshSplatting, a mesh-based reconstruction approach that jointly optimizes geometry and appearance through differentiable rendering. By enforcing connectivity via restricted Delaunay triangulation and refining surface consistency, MeshSplatting reconstructs smooth, visually high-quality meshes end to end; the resulting meshes render efficiently in real-time 3D engines. On Mip-NeRF360, it improves PSNR by +0.69 dB over MiLo, the current state of the art for mesh-based novel view synthesis, while training 2x faster and using 2x less memory, bridging neural rendering and interactive 3D graphics for seamless real-time scene interaction. The project page is available at https://meshsplatting.github.io/.
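The abstract highlights two ingredients that separate MeshSplatting from point-based splatting: mesh connectivity enforced via a (restricted) Delaunay triangulation, and output meshes that drop directly into standard engine pipelines. The snippet below is a minimal illustrative sketch, not the authors' implementation: it uses SciPy's plain 2D Delaunay triangulation on a toy height field as a stand-in for the restricted Delaunay triangulation described in the paper, and trimesh to export the result as an ordinary colored triangle mesh. The point set, colors, and file name are invented for illustration only.

```python
# Sketch only: connectivity from a Delaunay triangulation + export as a
# standard mesh asset. Not the MeshSplatting code; a simplified stand-in.
import numpy as np
from scipy.spatial import Delaunay   # pip install scipy
import trimesh                        # pip install trimesh

# Toy "optimized" point set: a bumpy height field with per-vertex colors.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(2000, 2))
z = 0.2 * np.sin(3.0 * xy[:, 0]) * np.cos(3.0 * xy[:, 1])
vertices = np.column_stack([xy, z])                        # (N, 3) float
colors = (255 * (0.5 + 0.5 * vertices)).astype(np.uint8)   # (N, 3) uint8

# Connectivity: triangulate in the 2D parameter domain; the simplices index
# into the 3D vertex array, giving a single connected triangle mesh.
# (The paper uses a restricted Delaunay triangulation on the reconstructed
# surface instead of this flat 2D version.)
faces = Delaunay(xy).simplices                             # (M, 3) int

# Export as a standard asset; PLY preserves per-vertex colors.
mesh = trimesh.Trimesh(vertices=vertices, faces=faces, vertex_colors=colors)
mesh.export("reconstruction.ply")
print(f"exported mesh with {len(vertices)} vertices and {len(faces)} faces")
```

The exported file can be opened directly in mesh-based tools and engines, which is the interoperability point the abstract emphasizes; in the actual method the vertex positions and appearance would come from the differentiable-rendering optimization rather than from a synthetic height field.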
Community
MeshSplatting introduces a differentiable rendering approach that reconstructs connected, fully opaque triangle meshes for fast, memory-efficient, high-quality novel view synthesis.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Inverse Rendering for High-Genus Surface Meshes from Multi-View Images (2025)
- Improving Multi-View Reconstruction via Texture-Guided Gaussian-Mesh Joint Optimization (2025)
- SparseSurf: Sparse-View 3D Gaussian Splatting for Surface Reconstruction (2025)
- Radiance Meshes for Volumetric Reconstruction (2025)
- LARM: A Large Articulated Object Reconstruction Model (2025)
- TagSplat: Topology-Aware Gaussian Splatting for Dynamic Mesh Modeling and Tracking (2025)
- RePose-NeRF: Robust Radiance Fields for Mesh Reconstruction under Noisy Camera Poses (2025)