A lot of buzz lately around Lytro, the company that’s building a new camera based on light-field (aka plenoptic) technology. In a nutshell, they’re building a camera that will let you refocus your photos after you’ve taken them. (Go poke around here if you want to see some examples of what’s being discussed.)
And it is indeed some very cool technology, although I’m a little surprised it’s taken this long to productize, given that the basic concepts have been floating around for a long time now. (Okay, technically a really, really long time now.)
But I think it’s worth talking about how this fits into the workflow of a typical photographer. Because, although there’s been no shortage of people willing to label this a ‘revolution’ in the way we do photography, I personally see it as more of a feature – one more thing that will be nice to have in your bag of tricks.
A useful trick, to be sure, but if you lined up all the features I’d like in a camera, it would probably land somewhere in the middle of the pack. Somewhere (far) below a high-resolution, high-dynamic-range sensor. Somewhere below good ergonomics. Maybe (if it’s well implemented) I’d choose it over an extremely high burst speed or an integrated Wi-Fi connection. Maybe.
But it’s the ‘well implemented’ part that worries me, because this isn’t a feature that would compel me to buy a particular camera above all others. There are always trade-offs when you’re designing a camera, and the Lytro camera will be no exception. So unless they manage to execute extraordinarily well on all the other aspects of their camera, it could easily turn into something that’s just not all that compelling for most people. If I told you there was a camera out there that could capture 100-megapixel images, you’d probably think that’s a very cool device, right? But then you find out it can only shoot at ISO 50 – that it produces absolutely unusable images unless you’re shooting in bright sunlight… Well, suddenly that camera is a lot less interesting.
So, in much the same way as we’ve seen with the Foveon sensor – a cool technology that was never paired with a camera that was all that good in other areas – we’ll just have to wait and see what the eventual product from Lytro looks like. I’m optimistic – there are some smart folks there – but it’s not a trivial challenge.
I think the best thing we can hope for is that this sort of feature doesn’t remain something that’s only available from Lytro. Hopefully we’ll see similar capabilities coming from the other manufacturers as well. Yes, there are definitely patents in place on a lot of this, but there are also a number of different ways to implement light-field capture. For instance, you can (as I believe is the case with Lytro) use optics that capture multiple viewpoints and send them all to a single large sensor. Or you can use multiple lenses, each coupled with its own sensor. Either way you get an image (or set of images) that requires some post-processing to produce something that’s human-friendly.
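To make that post-processing step a little more concrete, here’s a minimal sketch of the classic shift-and-sum refocusing trick for a camera array: shift each view in proportion to its offset from the centre of the array, then average. Everything here is illustrative – the `views` layout and the `shift_per_view` parameter are my own assumptions, and a real pipeline would also need rectification and sub-pixel interpolation – but it shows why a light-field capture needs computation before it becomes a photograph.

```python
import numpy as np

def refocus(views, shift_per_view):
    """Shift-and-sum refocusing over a grid of camera-array views.

    views: dict mapping (row, col) grid positions to HxW grayscale images.
    shift_per_view: pixels of shift per unit of grid offset; sweeping this
    value moves the synthetic plane of focus through the scene.
    """
    rows = [r for r, _ in views]
    cols = [c for _, c in views]
    rc, cc = np.mean(rows), np.mean(cols)  # centre of the array

    acc = None
    for (r, c), img in views.items():
        # Shift each view in proportion to its offset from the centre so
        # that objects at the chosen depth line up across all views...
        dy = int(round(shift_per_view * (r - rc)))
        dx = int(round(shift_per_view * (c - cc)))
        aligned = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        acc = aligned if acc is None else acc + aligned

    # ...and averaging blurs out everything that didn't line up.
    return acc / len(views)
```

Sweep `shift_per_view` across a range of values and you get the “refocus after the fact” effect from a single captured light field.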
Which is fine. Something I’ve been predicting for years (and have said many times on the This Week in Photography podcast) is that the most common camera we’ll see in the future will be a computational camera. And personally, I’m willing to bet that more often than not it will be composed of an array of smaller lenses and sensors rather than a single monolithic lens/sensor.
Why? Big sensors are expensive to manufacture, largely because a single defect forces you to scrap the whole die. Big glass is also expensive – it’s very difficult to create a large lens without introducing all sorts of optical aberrations. (Although smart post-processing will be increasingly important in removing those too.)
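The yield half of that argument is easy to put numbers on. Here’s a toy version of the standard Poisson yield model – the defect density below is an assumed illustrative value, not real fab data – showing how the chance of a defect-free die falls off exponentially with its area:

```python
import math

# Toy Poisson yield model: P(zero fatal defects) = exp(-D * A),
# where D is defect density (defects per cm^2) and A is die area.
D = 0.5  # assumed illustrative defect density, not real fab data

def yield_rate(area_cm2: float) -> float:
    return math.exp(-D * area_cm2)

full_frame_cm2 = 8.6   # a ~36mm x 24mm sensor
phone_cam_cm2 = 0.2    # a small phone-camera sensor

print(f"full-frame yield:   {yield_rate(full_frame_cm2):.1%}")  # ~1.4%
print(f"phone-sensor yield: {yield_rate(phone_cam_cm2):.1%}")   # ~90.5%
```

Under those (made-up) numbers, you’d throw away almost every big die while nearly every small one survives – which is exactly why tiny sensors cost a few bucks.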
So it’s clear to me that there’s a better and more cost-effective solution waiting to be developed: one that uses an overlapping array of inexpensive cameras. (Lytro started out this way, incidentally, via the Stanford Multi-Camera Array.)
The camera in your phone, or the one stuck in the bezel of your laptop, costs only a few bucks when purchased in quantity. Now make yourself a 5×5 array of those things. If each one is 8 megapixels (easily available these days), that’s a lot of resolution to play with. No, you won’t be able to generate a single 200-megapixel image, but you’ll still be able to get some seriously high-res imagery that also has a lot of additional benefits (see my next post for more on those).
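For a rough sense of the numbers (the super-resolution gain below is my own generous assumption, not anyone’s spec):

```python
# Back-of-the-envelope numbers for a hypothetical 5x5 array of 8MP modules.
sensors = 5 * 5
mp_each = 8
raw_megapixels = sensors * mp_each   # 200 MP of raw samples captured

# The views mostly overlap, so they resample the same scene rather than
# tiling it. Assume (generously) a 2x linear gain from multi-frame
# super-resolution over a single module:
effective_megapixels = mp_each * 2 * 2   # ~32 MP of usable detail

print(raw_megapixels, effective_megapixels)   # 200 32
```

So you capture 200 megapixels of data but end up with something closer to a (still very respectable) 32-megapixel image – plus depth, refocus, and all the other light-field goodies.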
And yes, I’m completely ignoring a ton of engineering challenges here, but there’s nothing I’m aware of that feels like it can’t be solved within the next few years.
Bottom line: Don’t be at all surprised if the back of a future iPhone looks something like the image at the top of this post. I’m not gonna guarantee it, but I’m willing to take bets…
(Click here for Part 2 of this discussion)
A brilliant tech indeed. If implemented well, its use in compositing (see After Effects, etc.) could be staggering.
It’s pretty cool technology, and the little demos on their site are fun to play with, but the thing that sticks in my mind is the question of output.
What do you do with these photos? How do you display them?
You couldn’t print these images properly; you could edit and then print, but what would you end up with?
The world’s coolest postage stamps?
I feel that everyone is so caught up in the ways to capture images that they forget why they’re taking photographs and what the photos are for.