So let’s talk some more about the ramifications of the lightfield/plenoptic technology that I looked at in my last post. For Lytro, the marketing push (and the accompanying press) is all about the ability to re-focus the image after the photo’s been taken, as a post-process, and they show you these dramatic foreground-to-background refocuses. But that’s really just a parlor trick: how often are you suddenly going to decide that the photo you shot would be much better off if the background were in focus instead of the person standing in the foreground?
On the other hand, being able to do a very subtle refocus – for example, to make sure that the eyes in a close-up portrait are perfectly sharp – has real value to almost every photographer. There’s many a shot I’ve taken where I wish I could have done exactly that!
But there’s actually a lot more to this technology than just refocusing. In reality what you’ve got here (or, more specifically, what can be computed here) is an additional piece of data for every pixel in the image: its depth, i.e. its distance from the camera.
So it’s not just the ability to refocus on a certain object in the image – it’s overall control over the focus at every depth plane. The narrow-focus ‘tilt-shift’ miniature effect becomes easy. You can even have multiple planes of focus. And macro photography is almost certainly going to see a big benefit as well.
While we’re at it, you’ll also have the ability to choose the exact characteristics of the out-of-focus areas – the bokeh. This includes the ability to create ‘stunt bokeh’ similar to what certain Lensbaby products can produce (see here).
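To make that concrete, here’s a minimal sketch of depth-driven refocusing with a swappable blur kernel (the kernel shape is exactly where the ‘stunt bokeh’ lives). It’s plain NumPy/SciPy on a grayscale image plus its depth map; the function names, parameters, and the brighter-is-closer depth convention are all my own illustration, not anything from Lytro’s actual software:

```python
import numpy as np
from scipy.ndimage import convolve

def aperture_kernel(radius, shape="disc"):
    """Build a normalized blur kernel. 'disc' approximates an ordinary
    round aperture; 'ring' gives the donut bokeh of a mirror lens.
    Swapping in arbitrary masks here is the 'stunt bokeh' idea."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    if shape == "ring":
        k = ((r2 <= radius**2) & (r2 >= (0.6 * radius)**2)).astype(float)
    else:
        k = (r2 <= radius**2).astype(float)
    return k / k.sum()

def refocus(image, depth, focus_depth, max_radius=8, shape="disc"):
    """Blur each pixel in proportion to how far its depth is from the
    chosen focal plane. image: HxW grayscale floats; depth: HxW in
    [0, 1] with 1.0 = nearest to camera (brighter = closer)."""
    blur = np.abs(depth - focus_depth)
    if blur.max() == 0:
        return image.copy()
    # Quantize the per-pixel blur into integer radii so we only have
    # to convolve the whole frame max_radius times.
    radii = np.round(blur / blur.max() * max_radius).astype(int)
    out = image.copy()
    for r in range(1, max_radius + 1):
        mask = radii == r
        if mask.any():
            out[mask] = convolve(image, aperture_kernel(r, shape))[mask]
    return out
```

Sweeping focus_depth is the after-the-fact refocus, and for multiple planes of focus you’d just take the element-wise minimum of several such blur maps before quantizing.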
Oh, and it’s also pretty easy to generate a stereo image pair, if you’re into that sort of thing…
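Since depth is what drives it, here’s a toy depth-image-based-rendering sketch of that stereo trick, under the same assumptions as above (grayscale image, depth in [0, 1] with 1.0 nearest). It leans on NumPy’s last-write-wins behavior for repeated fancy indices as a poor man’s occlusion test:

```python
import numpy as np

def stereo_pair(image, depth, max_disparity=12):
    """Synthesize a left/right eye pair by shifting each pixel
    horizontally in proportion to its depth (nearer = bigger shift)."""
    h, w = image.shape
    disparity = np.round(depth * max_disparity).astype(int)
    left, right = np.zeros_like(image), np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Write far pixels first so nearer ones overwrite them.
        xs = cols[np.argsort(depth[y])]
        left[y, np.clip(xs + disparity[y, xs], 0, w - 1)] = image[y, xs]
        right[y, np.clip(xs - disparity[y, xs], 0, w - 1)] = image[y, xs]
    return left, right
```

The black holes this leaves where foreground used to hide background are the classic disocclusion problem; a real implementation would inpaint them.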
But wait, there’s more! Making use of depth information is something we do all the time in the visual effects world. Consider the image below.
Here’s the depth image for that scene, where brighter areas are (obviously) closer to camera.
In the same way that we can use this information to choose where to focus, we can apply other image adjustments to different depth areas. Want to introduce some atmosphere? Just color-correct the ‘distant’ pixels to be lower contrast.
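Want that in code? A minimal sketch, using the same brighter-is-closer depth convention: fade distant pixels toward a flat haze value, which lowers their contrast.

```python
import numpy as np

def add_atmosphere(image, depth, strength=0.6, haze_value=0.8):
    """Blend distant pixels toward a flat haze value; the farther
    the pixel, the more haze. depth: HxW in [0, 1], 1.0 = nearest."""
    distance = 1.0 - depth             # 1.0 = farthest from camera
    weight = strength * distance       # per-pixel blend amount
    return image * (1.0 - weight) + haze_value * weight
```

That’s simple linear fog; a renderer would more often use an exponential falloff (weight = 1 - exp(-density * distance)), but the depth-keyed principle is identical.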
From there it’s just a short step to compositing new elements into the scene – anything from a snowstorm to ravenous rampaging raptor robots.
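The compositing itself is just a z-buffer merge. A sketch, again assuming matched depth maps for both elements:

```python
import numpy as np

def depth_composite(bg, bg_depth, fg, fg_depth):
    """Z-buffer style merge: at every pixel, keep whichever element
    is nearer to camera (brighter-is-closer convention again).
    Returns the merged image and its merged depth map."""
    nearer = fg_depth > bg_depth
    return np.where(nearer, fg, bg), np.maximum(fg_depth, bg_depth)
```

Give every snowflake (or raptor robot) its own depth value and it automatically lands in front of or behind the photo’s subjects, no hand-drawn masks required.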
Bottom line? Computational photography, in its many forms, is the future of photography. We’re almost certainly going to see the single-lens, single-sensor paradigm go away, and we’re absolutely going to live in a world where more and more of the image-creation process happens long after you’ve actually ‘taken’ the photograph. Personally, I’m looking forward to it!