Remember when Leia showed up in R2-D2's projector and we all lost our minds? That was 1977. Nearly fifty years of waiting... and the holograms finally showed up. Not with a press conference. Not with a billion-dollar launch event. They arrived as millions of tiny, fuzzy, imperfect blobs quietly working together to recreate reality. I'm not even a little bit okay about this.
The Old Way Died So the New Way Could Live
Wren from Corridor Digital just dropped what might be the most important explainer video of the year. It's about Gaussian Splatting... the technology that replaced Neural Radiance Fields (NeRFs) and is now genuinely, no-hyperbole, ushering in a new visual medium alongside photos, video, and sound.


Let that land for a second.
A new medium.
NeRFs were brilliant. They could capture a 3D scene that looked photoreal out of the box. Wren even used them to make a Nerf commercial with Luka Dončić. But they were too slow. Couldn't be edited. Weren't truly 3D. They were a formula... not a scene. Beautiful math trapped behind a wall of impracticality.
Less than a year later, they were gone.
Replaced by something better. Something that actually works in the real world.
Fuzzy Blobs as Superpower
Here's where my nerdy mystic brain starts sparking.



Gaussian Splats represent 3D scenes as millions of tiny fuzzy blobs. Each one stores its position, rotation, shape, opacity, and... this is the genius part... view-dependent color. Meaning each blob shows you a different shade depending on where you're looking at it from.
Because you're not seeing an object. You're seeing light that bounced off of it. And the angle matters.
Think about that for a second. These little imperfect, fuzzy, overlapping blobs... individually they're nothing special. But collectively? They create the illusion of photorealism so convincing your brain can't tell the difference.
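To make "each one stores" concrete, here's a minimal sketch of what a single blob carries. This is a hypothetical Python layout for illustration... real pipelines pack these into flat GPU tensors, not dataclasses:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Splat:
    """One fuzzy blob. Field names are illustrative, not a real file format."""
    position: np.ndarray   # (3,) center in world space
    rotation: np.ndarray   # (4,) quaternion orienting the ellipsoid
    scale: np.ndarray      # (3,) per-axis radii... the blob's "shape"
    opacity: float         # 0..1, how much the blob occludes what's behind it
    sh_coeffs: np.ndarray  # (k, 3) spherical-harmonic coefficients:
                           # the view-dependent RGB color

# one blob is nothing special; a scene is millions of these
blob = Splat(
    position=np.zeros(3),
    rotation=np.array([1.0, 0.0, 0.0, 0.0]),
    scale=np.full(3, 0.01),
    opacity=0.8,
    sh_coeffs=np.zeros((4, 3)),
)
```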
Millions of broken little pieces... quietly working together... creating something beautiful.
Sound familiar? 💙
I keep thinking about how the Creator builds. Not with perfection. With collaboration. With pieces that don't look like much on their own but become breathtaking when they show up together. Spherical Harmonics... the math behind how each Gaussian shifts its color based on viewing angle... is described in the video as "mathematical minimalism." A simple formula replacing infinite stored data. Elegant. Efficient. Beautiful.
The Builder of our Universe Playground has been doing Spherical Harmonics since the first sunset. We just finally figured out the notation.
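That "mathematical minimalism" fits in a few lines. Here's a sketch of evaluating degree-0 and degree-1 spherical harmonics to get a blob's color for a given viewing direction... the constants are the standard real-SH basis values, the sign convention follows common 3DGS reference code, and the coefficient layout is an assumption for illustration:

```python
import numpy as np

# Real spherical-harmonic basis constants (degrees 0 and 1).
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_color(coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate view-dependent RGB from SH coefficients.

    coeffs:   (4, 3) array: one degree-0 term and three degree-1
              terms, each holding an RGB triple.
    view_dir: unit vector from the camera toward the Gaussian.
    """
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # (3,) RGB that shifts with viewing angle
```

Four numbers per channel instead of a stored color for every possible angle... that's the "simple formula replacing infinite stored data."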
How It Actually Works
The capture process follows the same principles as traditional Photogrammetry... Structure from Motion, overlapping images, point clouds. You photograph something from every angle you can reach. Software figures out where each photo was taken in 3D space, triangulates matching pixels, and builds a sparse cloud of points.

With old-school photo scanning, you'd turn that cloud into a mesh. Project textures onto it. Spend hours cleaning it up. And it still wouldn't look real because the color was flat... the same shade from every angle.
With Gaussian splats, those points become the fuzzy blobs. They grow on the point cloud "like bacteria in a petri dish" (Wren's words, and I love them). Each blob learns its shape, transparency, and that magical view-dependent color through a training process.
The result? Real-time rendering in a browser. Photoreal. No manual lighting setup. No shader tweaking.
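The reason it renders in real time is that blending is embarrassingly simple. Here's a sketch of the core per-pixel step, front-to-back alpha compositing over depth-sorted splats (a simplification... the real renderer does this in parallel on the GPU after projecting each 3D Gaussian to a 2D footprint):

```python
import numpy as np

def composite(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Blend depth-sorted splats covering one pixel, front to back.

    colors: (n, 3) RGB per splat.
    alphas: (n,) effective opacity per splat, after the 2D Gaussian
            falloff has been applied.
    Assumes splats are already sorted nearest-first.
    """
    out = np.zeros(3)
    transmittance = 1.0  # how much light still reaches the eye
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return out
```

No ray marching, no shader graph... just sort and blend. That's why it runs in a browser.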
Wren's practical tips for capture are gold:
- Lock your shutter speed high (1/500th minimum, 1/1000th ideal)
- Lock white balance, ISO, tint, focus
- Blur is your worst enemy
- Film at 60 FPS to reduce rolling shutter
- It's not a video... it's a dataset
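"Blur is your worst enemy" is also something you can automate. A common trick is scoring each extracted frame by the variance of its Laplacian and dropping the soft ones before training... here's a pure-numpy sketch (the threshold is a made-up placeholder; tune it per camera):

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a Laplacian response: blurry frames score low.

    gray: 2D float array (one grayscale frame).
    """
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def keep_sharp(frames, threshold=50.0):
    """Drop frames below the (hypothetical) sharpness threshold."""
    return [f for f in frames if sharpness(f) >= threshold]
```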
Tools like Postshot, Luma AI, Kiri Engine, and RealityScan handle tracking and training. Drag footage in before bed. Wake up to a hologram. That's where we are now.
The 4D Leap
But static splats are just the beginning.
4DV.ai demonstrated 4D Gaussian splats... adding time as a dimension. Their approach doesn't reconstruct each frame separately. Instead, each Gaussian point carries continuous velocity and lifespan data. The result? Framerate-free interpolation... you can sample the scene at any rate, demonstrated up to 10,000 FPS. Bullet-time effects. Real-time path tracing and relighting. 100x compression compared to raw volumetric capture data.
Wren conducted an interview inside a 4D splat. Moving through space AND time. Relighting the scene in real time.
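The "continuous velocity and lifespan" idea boils down to each blob being a function of time. Here's a deliberately simplified sketch... linear motion and a hard lifespan window, standing in for the continuous parameterization described in the video:

```python
import numpy as np

def splat_at(center, velocity, birth, lifespan, t):
    """Where one 4D Gaussian sits at time t, and whether it's visible.

    center, velocity: (3,) arrays.
    birth, lifespan:  seconds.
    The linear motion model is a simplification for illustration.
    """
    position = center + velocity * (t - birth)
    alive = birth <= t <= birth + lifespan
    return position, alive
```

Because t is a continuous float rather than a frame index, you can sample at 24 FPS, 10,000 FPS, or anything in between... no per-frame reconstruction required. That's where the bullet-time effects come from.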
This isn't coming. This is here.
Why This Matters Beyond VFX
The thing that hit me hardest wasn't the technical wizardry. It was the applications.
Historical preservation. Destroyed landmarks... scanned and rebuilt as splats from existing photo datasets. Existing photogrammetry data retroactively upgraded. Meaning every photo scan ever taken just became more valuable.
Virtual production with LED walls using real scanned locations instead of CGI environments. 360-degree drone scanning capturing entire courtyards in minutes. VR and spatial computing experiences that feel genuinely present.
And the friction points... camera count, compute time, pipeline complexity... are shrinking fast. Mainstream adoption within one to two years isn't optimism. It's trajectory.
The Brushstroke Metaphor
Wren offered one line that stopped me cold:
> "A Gaussian splat is just a collection of fuzzy blobs that creates the illusion of photorealism."
Just like a painting is simply a collection of brushstrokes creating the illusion of an image.
I think about the younglings I work with. The ones who feel like fuzzy, undefined blobs... unsure of their shape, their purpose, their color. Told they're not enough. Not sharp enough. Not clear enough.
But here's what Gaussian Splatting teaches us, if we're paying attention:
You don't have to be perfect to be part of something photoreal. You just have to show up. Overlap with the people around you. Let your unique angle contribute something no one else can see from where they're standing.
Light doesn't fight darkness. It just shows up. ✨
And apparently... it shows up as millions of fuzzy little blobs.
We're watching a new medium be born. Not theoretically. Not "someday." Right now. In browsers. On phones. In studios where artists are already replacing green screens with volumetric capture stages.
If you're a creator of any kind... filmmaker, artist, game developer, educator, preservationist, or just someone who cares about capturing the world as it actually is... this is your moment to lean in. The tools are accessible. The learning curve is shrinking. And the community building around this tech is generous and open.
Go capture something. Even if it's just your kitchen. Even if it's imperfect and has holes where the camera couldn't see. Start the dataset. Train the splat. See what millions of fuzzy blobs can become when they quietly work together. 🚀
Original video by Corridor Crew — Watch on YouTube ↗
Echoes
Wisdom from across the constellation that resonates with this article.
“Explore Captain Disillusion’s channel for VFX education and debunking methodology”
— Corridor Crew | VFX Artists React to Bad & Great CGI 57 (Ft. Captain Disillusion)
“Study how constraints can become creative identity rather than limitations”
— Corridor Crew | VFX Artists React to Bad & Great CGI 57 (Ft. Captain Disillusion)
“Audit current work for excess… cut 30% and see if it improves”
— Corridor Crew | VFX Artists React to Bad & Great CGI 57 (Ft. Captain Disillusion)