1. Introduction
- NeRF
- Models a scene's volume density and emitted radiance with the weights of a neural network (an MLP)
- Uses volume rendering to synthesize novel views (see the rendering sketch after this list)
- Shortcomings
- Works well only in controlled settings
- Lighting and content are effectively static, since all images are captured within a short time span
- Performance degrades with moving objects or variable illumination
- Assumes a static world
- the density and radiance of the world are constant → any photos taken from the same position and orientation must be identical
- The real world is not static
- same location, different times → people move; sky, sun, weather, etc. change
- same location, same time → exposure, color correction, camera settings, etc. still differ
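
As context for the NeRF bullets above, here is a minimal sketch of the volume-rendering quadrature used to turn per-sample density and color into a pixel color. The `field` callable, its signature, and the sampling bounds are illustrative assumptions, not the paper's implementation.

```python
import torch

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Composite per-sample density/color along one ray (NeRF-style quadrature).

    `field` is a stand-in for the trained MLP: it maps (positions, view_dir)
    to (sigma, rgb) with shapes (n_samples,) and (n_samples, 3).
    """
    # Sample depths along the ray between the near and far bounds.
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # (n_samples, 3)

    sigma, rgb = field(points, direction)             # densities and colors

    # Distances between adjacent samples (last one padded to "infinity").
    delta = torch.cat([t[1:] - t[:-1], torch.tensor([1e10])])

    # alpha_i = 1 - exp(-sigma_i * delta_i); T_i = prod_{j<i} (1 - alpha_j)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha                           # contribution of each sample

    # Expected color of the ray: sum_i w_i * c_i
    return (weights[:, None] * rgb).sum(dim=0)
```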
2. NeRF in the Wild
- NeRF’s assumption
- that a point in 3D space observed from the same position and viewing direction in two different images has the same intensity.
- Violations of this assumption
- Photometric variation
- illumination variation
- time of day, weather conditions, etc.
- photographic imaging pipelines (auto-exposure, white balance, tone mapping, etc.)
- Transient Objects
- real-world landmarks are rarely captured in isolation
- cars, other objects, pedestrians, etc. appear in and occlude the scene
- NeRF-W resolves these issues with two extensions
- Latent Appearance Modeling
- Transient Object Disentanglement

2-1. Latent Appearance Modeling
- Each image is assigned its own appearance embedding vector, optimized jointly with the network weights in the style of Generative Latent Optimization (GLO); the embedding conditions only the predicted color (sketched in the code after this list)

- Guarantees that the 3D geometry stays static and shared across all images
- while giving each image the freedom to vary the emitted radiance of the scene
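
A minimal sketch of the GLO-style per-image appearance embedding, assuming a toy PyTorch MLP. The class and parameter names (`AppearanceConditionedNeRF`, `appearance_dim`, layer sizes) are illustrative, not the paper's architecture; the point is that the embedding feeds only the color branch, so density stays image-independent.

```python
import torch
import torch.nn as nn

class AppearanceConditionedNeRF(nn.Module):
    """Toy NeRF head with a per-image appearance embedding (GLO-style).

    The embedding table is a free parameter per training image, optimized
    jointly with the network; it feeds only the color branch, so density
    (geometry) is shared by every image.
    """
    def __init__(self, n_images, feat_dim=256, appearance_dim=48):
        super().__init__()
        self.appearance = nn.Embedding(n_images, appearance_dim)  # one vector per image
        self.trunk = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                   nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.density_head = nn.Linear(feat_dim, 1)                 # no image index here
        self.color_head = nn.Sequential(
            nn.Linear(feat_dim + 3 + appearance_dim, feat_dim // 2), nn.ReLU(),
            nn.Linear(feat_dim // 2, 3), nn.Sigmoid())

    def forward(self, xyz, view_dir, image_idx):
        h = self.trunk(xyz)
        sigma = torch.relu(self.density_head(h))                   # geometry: image-independent
        emb = self.appearance(image_idx)                           # image-specific appearance
        rgb = self.color_head(torch.cat([h, view_dir, emb], dim=-1))
        return sigma, rgb
```

Because the embedding is just a learned vector, it can later be optimized for a held-out image or interpolated between training images to control the rendered appearance without touching the geometry.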

2-2. Transient Object Disentanglement

- Each image is additionally assigned a transient embedding vector
- The model adds an image-dependent "transient" head that emits its own density and color on top of the shared static scene
- The transient head also predicts an uncertainty β per rendered ray, so pixels the static scene cannot explain (cars, pedestrians, etc.) are down-weighted in the reconstruction loss
- At test time the transient head is discarded and only the static scene is rendered
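
A minimal sketch of the uncertainty-weighted photometric loss idea described above, assuming per-ray predictions; the function name, the β floor, and the loss weights are illustrative stand-ins, not the paper's exact formulation.

```python
import torch

def transient_aware_loss(pred_rgb, true_rgb, beta, transient_sigma,
                         beta_min=0.03, lambda_sigma=0.01):
    """Heteroscedastic photometric loss in the spirit of NeRF-W.

    `beta` is a per-ray uncertainty predicted by the transient head; pixels it
    cannot explain get a large beta and contribute less to the squared error.
    `transient_sigma` holds transient densities along the ray; a sparsity
    penalty discourages the transient field from explaining everything.
    """
    beta = beta + beta_min                                  # keep beta bounded away from 0
    recon = ((pred_rgb - true_rgb) ** 2 / (2 * beta[..., None] ** 2)).mean()
    log_term = torch.log(beta).mean()                       # prevents beta -> infinity
    sparsity = transient_sigma.abs().mean()                 # keep the transient field sparse
    return recon + log_term + lambda_sigma * sparsity
```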