NeRF For Larger Scenes
NSVF
NSVF attaches a learnable feature (spatial embedding) to each voxel. It also adopts self-pruning, which eliminates voxels with low opacity, and progressive training, which progressively splits each remaining voxel into 8 sub-voxels as training proceeds.
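A minimal sketch (not the official NSVF code) of two of these ideas: learnable features at voxel corners read out with trilinear interpolation, and opacity-based self-pruning. The grid resolution, feature size, and pruning threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VoxelFeatureGrid(nn.Module):
    def __init__(self, res=64, feat_dim=32):
        super().__init__()
        self.res = res
        # learnable feature at every voxel corner: (res+1)^3 embeddings
        self.features = nn.Parameter(torch.randn(res + 1, res + 1, res + 1, feat_dim) * 0.01)
        # occupancy mask used for self-pruning (True = voxel kept)
        self.register_buffer("alive", torch.ones(res, res, res, dtype=torch.bool))

    def forward(self, pts):
        """pts: (N, 3) points in [0, 1]^3 -> (N, feat_dim) interpolated features."""
        x = pts.clamp(0, 1) * self.res
        i0 = x.floor().long().clamp(max=self.res - 1)
        w = x - i0.float()                      # trilinear weights in [0, 1]
        feat = 0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = self.features[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
                    weight = ((w[:, 0] if dx else 1 - w[:, 0]) *
                              (w[:, 1] if dy else 1 - w[:, 1]) *
                              (w[:, 2] if dz else 1 - w[:, 2]))
                    feat = feat + weight.unsqueeze(-1) * corner
        return feat

    @torch.no_grad()
    def self_prune(self, density_fn, threshold=0.01, samples_per_voxel=8):
        """Drop voxels whose sampled density never exceeds `threshold`.
        `density_fn` maps (M, 3) points to their predicted densities."""
        idx = self.alive.nonzero(as_tuple=False).float()       # (M, 3) live voxel indices
        pts = (idx.repeat_interleave(samples_per_voxel, 0) +
               torch.rand(idx.shape[0] * samples_per_voxel, 3)) / self.res
        sigma = density_fn(pts).reshape(-1, samples_per_voxel)
        keep = sigma.max(dim=1).values > threshold
        dead = idx[~keep].long()
        self.alive[dead[:, 0], dead[:, 1], dead[:, 2]] = False
```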
DeRF
Splits the scene into Voronoi cells and uses an individual smaller MLP for each cell.
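A minimal sketch (an assumption, not DeRF's actual code) of the decomposition: each query point is routed to the MLP "head" whose cell center is nearest. The hard assignment, number of cells, and MLP size are simplifications of the paper.

```python
import torch
import torch.nn as nn

class VoronoiNeRF(nn.Module):
    def __init__(self, num_cells=8, hidden=64):
        super().__init__()
        # cell centers define the Voronoi partition of the scene
        self.centers = nn.Parameter(torch.rand(num_cells, 3))
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 4))               # RGB + density per cell
            for _ in range(num_cells))

    def forward(self, pts):
        """pts: (N, 3) -> (N, 4); each point is evaluated by its cell's MLP only."""
        dist = torch.cdist(pts, self.centers)     # (N, num_cells)
        cell = dist.argmin(dim=1)                 # nearest center = Voronoi cell id
        out = pts.new_zeros(pts.shape[0], 4)
        for k, head in enumerate(self.heads):
            mask = cell == k
            if mask.any():
                out[mask] = head(pts[mask])
        return out
```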
KiloNeRF
Splits the scene into a uniform voxel grid and uses an individual small MLP for each voxel. The small MLPs are trained with a large pretrained NeRF as a teacher model (distillation) to preserve quality. This work also achieves much faster inference.
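A minimal sketch (assumed, not the official KiloNeRF implementation) of a uniform grid of tiny MLPs plus a distillation step against a pretrained teacher; the grid resolution, network sizes, and sampling of distillation points are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridOfTinyNeRFs(nn.Module):
    def __init__(self, res=4, hidden=32):
        super().__init__()
        self.res = res
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                          nn.Linear(hidden, 4))               # RGB + density
            for _ in range(res ** 3))

    def forward(self, pts):
        """pts: (N, 3) in [0, 1]^3 -> (N, 4); each point goes to its voxel's MLP."""
        idx = (pts.clamp(0, 1) * self.res).long().clamp(max=self.res - 1)
        flat = idx[:, 0] * self.res ** 2 + idx[:, 1] * self.res + idx[:, 2]
        out = pts.new_zeros(pts.shape[0], 4)
        for k in flat.unique().tolist():
            mask = flat == k
            out[mask] = self.mlps[k](pts[mask])
        return out

def distillation_step(student, teacher, optimizer, batch=4096):
    """One distillation step: match the teacher NeRF's output at random points."""
    pts = torch.rand(batch, 3)
    with torch.no_grad():
        target = teacher(pts)                  # large pretrained NeRF as teacher
    loss = F.mse_loss(student(pts), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```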
NeRF For Faster Inference
PlenOctrees
In this work, an MLP that predicts spherical harmonics (SH) coefficients is used to store the implicit radiance field. The radiance data (in SH form) is then densely sampled and baked into an octree that caches radiance, giving much faster rendering at inference.
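A minimal sketch of how stored SH coefficients turn into view-dependent color, in the style of PlenOctrees / NeRF-SH (degree-2 real SH, 9 coefficients per channel). The basis constants are the standard real spherical-harmonic values; treating the final sigmoid and coefficient layout as assumptions here.

```python
import torch

SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199
SH_C2 = [1.0925484305920792, -1.0925484305920792, 0.31539156525252005,
         -1.0925484305920792, 0.5462742152960396]

def sh_basis(dirs):
    """dirs: (N, 3) unit view directions -> (N, 9) degree-2 SH basis values."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return torch.stack([
        torch.full_like(x, SH_C0),
        -SH_C1 * y, SH_C1 * z, -SH_C1 * x,
        SH_C2[0] * x * y, SH_C2[1] * y * z,
        SH_C2[2] * (2 * z * z - x * x - y * y),
        SH_C2[3] * x * z, SH_C2[4] * (x * x - y * y),
    ], dim=-1)

def sh_to_rgb(sh_coeffs, dirs):
    """sh_coeffs: (N, 3, 9) per-channel SH coefficients stored in the octree
    (or predicted by the NeRF-SH MLP); dirs: (N, 3) view directions."""
    basis = sh_basis(dirs)                                    # (N, 9)
    rgb = (sh_coeffs * basis.unsqueeze(1)).sum(dim=-1)        # (N, 3)
    return torch.sigmoid(rgb)
```

Because the view-dependent color is just a dot product with the SH basis, no MLP has to be evaluated at render time; the octree lookup plus this evaluation is what makes inference fast.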
NeRF For Faster Training
DONeRF
This work uses depth information to supervise training, which reduces the sample count per ray to 4-8 by placing samples near surfaces while maintaining similar render quality. Since ground-truth depth is unavailable at inference, a depth oracle network is jointly trained to predict the sampling positions (i.e., depth) at inference time.
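A minimal sketch (assumed, not the official DONeRF code) of depth-guided sampling: a small oracle network predicts a depth per ray, and only a handful of samples are placed in a narrow window around it. The oracle architecture, window size, and sample count are illustrative choices.

```python
import torch
import torch.nn as nn

class DepthOracle(nn.Module):
    """Tiny network that predicts a depth value per ray from origin + direction."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, rays_o, rays_d):
        return self.net(torch.cat([rays_o, rays_d], dim=-1)).squeeze(-1)   # (N,)

def sample_near_depth(rays_o, rays_d, depth, n_samples=8, window=0.1):
    """Place n_samples points in a +-window band around the predicted depth."""
    offsets = torch.linspace(-window, window, n_samples, device=depth.device)
    t = depth.unsqueeze(-1) + offsets                                      # (N, S)
    pts = rays_o.unsqueeze(1) + rays_d.unsqueeze(1) * t.unsqueeze(-1)      # (N, S, 3)
    return pts, t

# usage: the oracle is supervised with depth during training, then used at
# inference to pick sample positions for the shading network.
# oracle = DepthOracle()
# depth = oracle(rays_o, rays_d)
# pts, t = sample_near_depth(rays_o, rays_d, depth)
```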