| dc.contributor.advisor | Sitzmann, Vincent | |
| dc.contributor.author | Charatan, David | |
| dc.date.accessioned | 2026-01-20T19:47:46Z | |
| dc.date.available | 2026-01-20T19:47:46Z | |
| dc.date.issued | 2025-09 | |
| dc.date.submitted | 2025-09-15T14:39:42.311Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164597 | |
| dc.description.abstract | Differentiable rendering has established itself as an effective tool for 3D reconstruction and novel view synthesis. Most state-of-the-art differentiable rendering methods use purpose-built renderers to optimize specialized, nonstandard 3D representations. However, most downstream applications of differentiable rendering rely on 3D meshes, which are near-universally supported due to their suitability for a wide range of rendering, simulation, and 3D modeling workflows. While prior methods have explored using 3D meshes directly within gradient-based optimization, they have been limited to object-centric scenes and cannot reconstruct real-world, unbounded scenes. This work addresses this shortcoming via a differentiable rendering formulation that combines an off-the-shelf, non-differentiable triangle rasterizer with a 3D representation consisting of nested mesh shells. During every forward pass, these shells are extracted from an underlying signed distance field. The shells are then independently rasterized, and the resulting images are alpha-composited using opacities derived from the shells' per-vertex signed distance values. Notably, the shells' vertex positions are updated only via the underlying signed distance field, not via backpropagation through the rasterizer itself, which makes our method compatible with off-the-shelf, non-differentiable triangle rasterizers. To the best of our knowledge, our method is the first differentiable mesh rendering method that scales to unbounded, real-world 3D scenes, where it produces novel view synthesis results whose quality approaches that of state-of-the-art, non-mesh-based methods. Our method's performance is also competitive with state-of-the-art surface rendering methods on object-centric scenes. Ultimately, our method suggests that it may be possible to solve the differentiable rendering problem using tools from the conventional graphics toolbox rather than relying on specialized renderers. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Mesh Differentiable Rendering for Real-World Scenes | |
| dc.type | Thesis | |
| dc.description.degree | S.M. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| dc.identifier.orcid | https://orcid.org/0000-0002-1223-4475 | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Science in Electrical Engineering and Computer Science | |