Gallery
Visualising Radiance-Based Representations
What does a Radiance-Based Representation look like?
Here is a video captured by Niko Suenderhauf (QUT) at the 2024 Robotic Vision Summer School, held at ANU’s Kioloa Coastal Campus.
By extracting the frames and passing them through a photogrammetry pipeline to recover camera poses, we can train a Neural Radiance Field (NeRF): a spatially varying field of density and colour that reconstructs the scene. We can then query the field from any viewpoint to render a new image, letting us explore the scene after the fact.
Importantly, the appearance is view-dependent, meaning reflections, transparency and other complex visual effects can be learnt and encoded by the representation.
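The core idea can be sketched in a few lines of Python: sample points along a camera ray, ask the field for a density and a view-dependent colour at each sample, and alpha-composite the results into a pixel. The radiance_field function below is a hypothetical stand-in for the trained network (the real field is a learnt MLP, and rendering is done in batches on the GPU), so treat this as a conceptual sketch rather than the pipeline used for these scenes.

import numpy as np

def radiance_field(points, view_dirs):
    """Toy stand-in for a trained NeRF: a soft sphere whose colour shifts with viewing angle."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.exp(-4.0 * (dist - 1.0) ** 2)           # high density near the sphere surface
    base = np.stack([dist, 1.0 - dist, 0.5 * np.ones_like(dist)], axis=-1)
    tint = 0.5 * (view_dirs + 1.0)                        # view-dependent term (stands in for reflections)
    return density, np.clip(0.7 * base + 0.3 * tint, 0.0, 1.0)

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=128):
    """Volume-render one ray: sample the field, then alpha-composite front to back."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction              # sample positions along the ray
    view_dirs = np.broadcast_to(direction, points.shape)
    density, colour = radiance_field(points, view_dirs)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))    # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)                # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))  # transmittance so far
    weights = alpha * trans
    return (weights[:, None] * colour).sum(axis=0)        # composited pixel colour

# Looking at the same scene from two directions gives different colours,
# which is how view-dependent effects like reflections are encoded.
pixel_a = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
pixel_b = render_ray(np.array([3.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
print(pixel_a, pixel_b)

Because the colour is a function of both position and viewing direction, rendering the same content from different angles produces different pixel values, which is exactly the view dependence described above.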
More NeRFs with Complex Appearance
Docking Simulator
QUT Meeting Room
Courtesy of colleagues at QUT.