RRM: Relightable assets using Radiance guided Material extraction
Abstract
Synthesizing NeRFs under arbitrary lighting has become a central problem in recent years. Recent efforts tackle it by extracting physically-based parameters that can then be rendered under arbitrary lighting, but they are limited in the range of scenes they can handle, often mishandling glossy scenes. We propose RRM, a method that can extract the materials, geometry, and environment lighting of a scene even in the presence of highly reflective objects. Our method consists of a physically-aware radiance field representation that informs physically-based parameters, and an expressive environment light structure based on a Laplacian Pyramid. We demonstrate that our contributions outperform the state of the art on parameter retrieval tasks, leading to high-fidelity relighting and novel view synthesis on surfacic scenes.
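To make the environment light representation more concrete, below is a minimal sketch of a Laplacian Pyramid decomposition of an equirectangular HDR map. It assumes dimensions divisible by a power of two and uses simple box filtering; the function names (`build_laplacian_pyramid`, `collapse_pyramid`) are illustrative and not taken from the authors' code.

```python
import numpy as np

def build_laplacian_pyramid(env_map: np.ndarray, levels: int = 4):
    """Decompose an equirectangular environment map of shape (H, W, 3)
    into band-pass detail levels plus a low-frequency base.
    Assumes H and W are divisible by 2**levels."""
    pyramid = []
    current = env_map.astype(np.float32)
    for _ in range(levels):
        h, w, c = current.shape
        # 2x2 box-filter downsample.
        down = current.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        # Nearest-neighbour upsample back to the current resolution.
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        pyramid.append(current - up)  # band-pass detail at this scale
        current = down
    pyramid.append(current)           # low-frequency base level
    return pyramid

def collapse_pyramid(pyramid):
    """Exactly reconstruct the full-resolution environment map."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = np.repeat(np.repeat(current, 2, axis=0), 2, axis=1) + detail
    return current
```

Conceptually, the coarse levels of such a pyramid capture smooth, low-frequency lighting, while the finer residuals carry the sharp detail needed for glossy reflections; the paper's actual construction and optimization of the pyramid may differ.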
Community
We present a new method for relightable radiance fields with an improved roughness representation, better diffuse/specular separation, and a new environment map representation based on Laplacian Pyramids.
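As background for the diffuse/specular separation mentioned above, the sketch below shows a generic physically-based split (Lambertian diffuse plus a GGX microfacet specular lobe) evaluated for a single light direction. This is a textbook formulation with illustrative names, not the paper's exact material parameterization.

```python
import numpy as np

def ggx_specular(n, v, l, roughness, f0=0.04):
    """Cook-Torrance specular term with a GGX normal distribution,
    Smith visibility, and Schlick Fresnel, for unit vectors n, v, l."""
    h = (v + l) / np.linalg.norm(v + l)        # half vector
    nl = max(float(np.dot(n, l)), 1e-4)
    nv = max(float(np.dot(n, v)), 1e-4)
    nh = max(float(np.dot(n, h)), 0.0)
    vh = max(float(np.dot(v, h)), 0.0)
    a2 = roughness ** 4                        # alpha = roughness**2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)           # NDF
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))  # Smith G
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                          # Fresnel
    return d * g * f / (4.0 * nl * nv)

def shade(albedo, roughness, n, v, l, light_rgb):
    """Outgoing radiance = (Lambertian diffuse + GGX specular) * L_i * cos."""
    nl = max(float(np.dot(n, l)), 0.0)
    diffuse = np.asarray(albedo) / np.pi
    specular = ggx_specular(n, v, l, roughness)
    return (diffuse + specular) * np.asarray(light_rgb) * nl
```

In a relightable radiance field, the specular branch would be integrated against the environment light (here, the Laplacian Pyramid) rather than a single direction; the single-light form above only illustrates how roughness and albedo enter the two branches.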
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling (2024)
- MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling (2024)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections (2024)
- IllumiNeRF: 3D Relighting without Inverse Rendering (2024)
- Relighting Scenes with Object Insertions in Neural Radiance Fields (2024)