Beyond Focus (Part 2)

So let’s talk some more about the ramifications of the lightfield/plenoptic technology that I looked at in my last post.  For Lytro, the marketing push (and the accompanying press) is all about the ability to re-focus the image after the photo’s been taken, as a post-process.  And they show you these massive foreground-to-background refocuses.  But that’s really just a parlor trick – how often are you suddenly going to decide that the photo you shot would be much better off if the background were in focus instead of the person standing in the foreground?

On the other hand, being able to do a very subtle refocus – for example to make sure that the eyes in a close-up portrait are perfectly sharp – that has real value to almost all photographers and there’s many a shot I’ve taken where I wish I could do exactly that!

But there’s actually a lot more to this technology than just refocusing. In reality what you’ve got here (or more specifically what can be computed here) is an additional piece of data for every pixel in an image – information about the depth or distance-from-camera.

So it’s not just the ability to refocus on a certain object in the image, it’s an overall control over the focus of every depth plane.  The narrow-focus ‘Tilt-Shift’ effect becomes easy.  You can even have multiple planes of focus.  And macro photography is almost certainly going to see a big benefit as well.
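To make that a bit more concrete, here’s a rough sketch of the idea in Python (NumPy + OpenCV).  It assumes you’ve already got an all-in-focus image plus a per-pixel depth map – the kind of data a plenoptic camera can provide – and the file names, the 0-to-1 depth encoding (brighter/higher = closer, matching the depth image further down), and the single-blur shortcut are all illustrative assumptions, not anything Lytro actually exposes:

```python
import cv2
import numpy as np

# Hypothetical inputs: an all-in-focus image plus a per-pixel depth map,
# normalized so that 1.0 = closest to camera and 0.0 = farthest away.
image = cv2.imread("scene.png").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

def refocus(image, depth, focus_depth, in_focus_band=0.05, max_blur=15.0):
    """Keep pixels near focus_depth sharp and blur everything else in
    proportion to how far it sits from the chosen focal plane."""
    distance = np.abs(depth - focus_depth)
    blur_amount = np.clip((distance - in_focus_band) / (1.0 - in_focus_band), 0.0, 1.0)

    # Shortcut: blur the whole frame once and blend.  A real implementation
    # would blur each depth slice separately (and could swap the kernel
    # shape to change the character of the bokeh).
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=max_blur / 3.0)
    alpha = blur_amount[..., None]
    return (image * (1.0 - alpha) + blurred * alpha).astype(np.uint8)

# A 'tilt-shift' style miniature look: only a narrow band of depths stays sharp.
cv2.imwrite("tilt_shift.png", refocus(image, depth, focus_depth=0.5))
```

Run it a second time with a different focus_depth and blend the two results and you’ve got your multiple planes of focus.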

While we’re at it, you’ll also have the ability to choose the exact characteristics of the out-of-focus areas – the bokeh.  This would include the ability to create ‘stunt bokeh’ similar to what certain Lensbaby products can produce (see here).

Oh, and it’s also pretty easy to generate a stereo image pair, if you’re into that sort of thing…
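If you’re curious, that stereo pair is little more than a depth-dependent horizontal shift.  A deliberately naive sketch, reusing the same hypothetical image and depth files as above and ignoring the occlusion handling and hole-filling a real implementation would need:

```python
import cv2
import numpy as np

image = cv2.imread("scene.png").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0  # 1.0 = near

def stereo_pair(image, depth, max_disparity=12.0):
    """Fake a second viewpoint: the closer a pixel is, the more it shifts."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    disparity = depth * max_disparity
    # Backward warp: sample the original image at the shifted coordinates.
    right = cv2.remap(image, xs - disparity, ys, cv2.INTER_LINEAR)
    return image.astype(np.uint8), right.astype(np.uint8)

left, right = stereo_pair(image, depth)
cv2.imwrite("left.png", left)
cv2.imwrite("right.png", right)
```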

But wait, there’s more!  Making use of depth information is something we do all the time in the visual effects world.  Consider the image below.

Here’s the depth image for that scene, where brighter areas are (obviously) closer to camera.

In the same way that we can use this information to choose where to focus, we can apply other image adjustments to different depth areas.  Want to introduce some atmosphere?  Just color-correct the ‘distant’ pixels to be lower contrast.
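Here’s what that might look like as a toy example – the file names and the brighter-is-closer depth encoding are, again, just assumptions for the sake of illustration:

```python
import cv2
import numpy as np

image = cv2.imread("scene.png").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0  # 1.0 = near

def add_atmosphere(image, depth, haze_color=(220, 210, 200), strength=0.6):
    """Wash distant pixels out toward a pale haze color (lower contrast),
    leaving nearby pixels untouched.  haze_color is in OpenCV's BGR order."""
    haze = np.ones_like(image) * np.array(haze_color, dtype=np.float32)
    amount = ((1.0 - depth) ** 2)[..., None] * strength   # ramps up with distance
    return (image * (1.0 - amount) + haze * amount).astype(np.uint8)

cv2.imwrite("hazy.png", add_atmosphere(image, depth))
```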

From there it’s just a step away to compositing new elements into the scene – anything from a snowstorm to ravenous rampaging raptor robots.
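And the compositing part is mostly just a per-pixel depth comparison – a new element only shows up where it’s nearer to camera than what’s already in the plate.  A bare-bones sketch, where the snow element, its matte, and its depth pass are all made-up files for illustration:

```python
import cv2
import numpy as np

plate       = cv2.imread("scene.png").astype(np.float32)
plate_depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0  # 1.0 = near

# Hypothetical pre-rendered element (say, falling snow) with its own matte and depth.
element       = cv2.imread("snow_rgb.png").astype(np.float32)
element_alpha = cv2.imread("snow_alpha.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
element_depth = cv2.imread("snow_depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# The element wins only where it sits in front of the plate's geometry.
in_front = (element_depth > plate_depth).astype(np.float32)
alpha = (element_alpha * in_front)[..., None]

comp = element * alpha + plate * (1.0 - alpha)
cv2.imwrite("comp.png", comp.astype(np.uint8))
```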

Bottom line?  Computational photography in its many forms is the future of photography.  We’re almost certainly going to see the single-lens, single-sensor paradigm go away and we’re absolutely going to live in a world where more and more of the image-creation process occurs long after you’ve actually ‘taken’ the photograph.   Personally, I’m looking forward to it!


X vs. Pro.

I’ve had a couple of people ask for my thoughts on the new FCPX release given my history with Apple and in particular my experience with how they dealt with another product that was focused (in our case almost exclusively) on professionals – the compositing software ‘Shake’.  So, even though I don’t think they’re totally analogous events, I figured I’d use it as an opportunity to make a couple of points about (my perception of) how Apple thinks.

For those that aren’t familiar with the history, Shake was very entrenched in the top end of the visual effects industry.  The overwhelming majority of our customers were doing big-budget feature film work and were, naturally, all about high-end functionality.

So after Apple acquired us there was a lot of concern that Cupertino wouldn’t be willing to continue to cater to that market and, although it took a few years, that concern proved well-founded.  The development team was gradually transitioned to working on other tools and Shake as a product was eventually end-of-life’d.

And back then the same questions were being asked as now – “Doesn’t Apple care about the high-end professional market?”

In a word, no.  Not really.  Not enough to focus on it as a primary business.

Let’s talk economics first.  There’s what, maybe 10,000 ‘high-end’ editors in the world?  That’s probably being generous.  But the number of people who would buy a powerful editing package that’s more cost-effective and easier to learn/use than anything else that’s out there?   More.  Lots more.  So, a $1000 high-end product vs. a $300 product for a market that’s at least an order of magnitude larger – call it $10 million in potential revenue versus $30 million and up.  Clearly makes sense, even though I’d claim that the dollars involved are really just a drop in the bucket either way for Apple.

So what else?  I mean what’s the real value of a package that’s sold only to high-end guys?  Prestige?  Does Apple really need more of that?  Again, look back at Shake.  It was dominant in the visual effects world.  You’d be hard-pressed to pick a major motion picture from the early years of this century that didn’t make use of Shake in some fashion.  And believe me, Lord of the Rings looks a lot cooler on a corporate demo reel than does Cold Mountain or The Social Network.  Swords and Orcs and ShitBlowingUp, oh my.  But really, so what?

Apple isn’t about a few people in Hollywood having done something cool on a Mac (and then maybe allowing Apple to talk about it).  No, Apple is about thousands and thousands of people having done something cool on their own Mac and then wanting to tell everyone about it themselves.  It’s become a buzzword but I’ll use it anyway – viral marketing.

And really, from a company perspective high-end customers are a pain in the ass.  Before Apple bought Shake, customer feedback drove about 90% of the features we’d put into the product.  But that’s not how Apple rolls – for them, high-end customers are high-bandwidth in terms of the attention they require relative to the revenue they return.  After the acquisition I remember sitting in a roomful of Hollywood VFX pros where Steve told everybody point-blank that we/Apple were going to focus on giving them powerful tools that were far more cost-effective than what they were accustomed to… but that the relationship between them and Apple wasn’t going to be something where they’d be driving product direction anymore.  Didn’t go over particularly well, incidentally, but I don’t think that concerned Steve overmuch… :-)

And the features that high-end customers need are often very, very unsexy.  They don’t look particularly good in a demo.  See, here’s the thing with how features happen at Apple – to a great extent, product development is driven by how well things can be demoed.  Maybe not explicitly – nobody ever told me to only design features that demoed well – but the nature of the organization effectively makes it work out that way.  Because a lot of decisions about product direction make their way very far up the management hierarchy (often to Steve himself).  And so the first question that comes up is ‘how are we going to show this feature within the company?’  All the mid-level managers know that they’re going to have a limited window of time to convey what makes a product or a feature special to their bosses.  So they either 1) make a sexy demo or 2) spend a lot of time trying to explain why some customer feels that some obscure feature is worth implementing.  Guess which strategy works best?

And by this I don’t mean to imply at all that the products are style over substance, because they’re definitely not.  But it’s very true that Apple would rather have products which do things that other products can’t do (or can’t do well), even if it means they leave out some more basic & boring features along the way.  Apple isn’t big on the quotidian.  In the case of FCP, they’d rather introduce a new and easier and arguably better method for dealing with cuts, or with scrubbing, or whatever, even if it means that they need to leave out something standard for high-end editors like proper support for OMF.  Or, to look all the way back to the iPod, they’d rather have a robust framework for buying and organizing music instead of supporting, say, an FM radio.  And it’s why Pages doesn’t have nearly the depth of Word but is soooo much more pleasant to use on a daily basis.

So if you’re really a professional you shouldn’t want to be reliant on software from a company like Apple.  Because your heart will be broken.  Because they’re not reliant on you.  Use Apple’s tools to take you as far as they can – they’re an incredible bargain in terms of price-performance.  But once you’re ready to move up to the next level, find yourself a software provider whose life-blood flows only as long as they keep their professional customers happy.  It only makes sense.

 

ADDENDUM.  I suppose I should make it clear (since some people are misinterpreting a few things) that I’m not complaining about Apple’s decisions with regards to either Shake or FCPX.  (As a stockholder I’ve got very little to complain about with regards to Apple’s decisions over the past several years :-))

And, in spite of the fact that MacRumors characterized this post as saying “Apple Doesn’t Care about Pro Market” I don’t believe at all that ‘professionals’ should immediately flee the platform.  As with anything, you should look at the feature set, look at the likely evolution, and make your own decisions.  My perception of the high-end professional category is informed by my history in feature-film production, which is a large, cooperative team environment with a whole lot of moving pieces.  Yours may be different.

Ultimately my goal was to shed some light on the thought-processes that go into Apple’s decisions, and the type of market they want to address.   Bottom line is that I do think that FCPX will provide incredible value for a huge number of people and will undoubtedly continue to grow in terms of the features that are added (or re-added) to it.   Just don’t expect everything that was in FCP7 to return to FCPX because they’re really different products addressing different markets.  It’s up to you to decide which market you work in.

Your camera, it lies!

If you’re at all tuned in to the world of digital photography you’re probably already aware of RAW files and why you should use them. But just in case you’re not, the quick answer is that RAW files allow you to preserve as much data as possible from what the sensor captured. This is in contrast to what happens when you shoot JPEG, where your camera makes some fairly arbitrary decisions about what it thinks the photo should look like, often throwing away a lot of detail in the process. Specifically it will throw away details in the high end of the image – the brightest parts.

But what a lot of people don’t realize is that even if you are shooting RAW, any review you do when looking at the display on the back of the camera is also not showing you any of that highlight detail. In other words, even if you’re shooting RAW your camera will only show you what you’d get if you were shooting JPEG.

Why is this an issue? Because it means that you may be tempted to underexpose the image in order to prevent, for example, your sky from ‘blowing out’. Even though in reality your RAW photo may have already captured the full tonal range of the sky.

Here’s a quick example – first of all, a photo that I took directly off the display on the back of my camera. (Yes, this is somewhat meta :-).  As you can see, the sky appears to be completely overexposed to white.

Here, I even made an animated GIF that shows you what the flashing overexposure warning on my camera looks like  (you may have to click on the image to see the animation).

Yup, the camera is telling me that, without a doubt, I’ve really overexposed this shot and have lost all that detail in the brightest part of the sky.  And if I had indeed been shooting JPEG this would be true.  I’d get home and view the file on my computer and I’d get an image that looks pretty much the same as what the display showed me, i.e. this:

But I wasn’t shooting JPEG, I was shooting RAW.  And if I take the RAW file that I actually shot and bring it into something like Aperture or Lightroom and work a little magic, you can see that there’s a whole bunch of beautiful blue sky in the allegedly overexposed area. Like this:
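If you’d rather poke at this outside of Aperture or Lightroom, you can rough out the same thing in Python with the rawpy library (a wrapper around LibRaw).  This is only a sketch – the file name is a placeholder and the highlight roll-off at the end is deliberately crude – but it’s enough to show that the data really is sitting in the file:

```python
import cv2
import numpy as np
import rawpy

# Decode the RAW file into 16-bit RGB without letting the library
# auto-brighten things for us.
with rawpy.imread("sky_shot.CR2") as raw:
    rgb16 = raw.postprocess(use_camera_wb=True,
                            no_auto_bright=True,
                            output_bps=16)

img = rgb16.astype(np.float32) / 65535.0

# Crude highlight roll-off: compress the top of the tonal range so the
# sky detail that the in-camera JPEG rendered as pure white becomes visible.
recovered = np.where(img > 0.75, 0.75 + (img - 0.75) * 0.4, img)

out = (np.clip(recovered, 0.0, 1.0) * 255).astype(np.uint8)
cv2.imwrite("recovered.png", cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
```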

Tricky tricky.  Note that even the histogram display is misleading you – it, too, is showing you data relative to the JPEG file, not the underlying RAW file.

See how the right side of the histogram slams up against the wall?  This also indicates that you’ve clipped data.  Only you haven’t.

So beware, gentle readers.  Beware of the camera that lies.  Unfortunately there’s no good solution to this other than to develop some instincts about how much you should (or shouldn’t) trust your camera. It sure would be nice if manufacturers offered the option to display the image (and histogram) relative to the complete data captured in the RAW file but I’ve yet to find a camera that allows for this. Anybody seen one?
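What you can do, after the fact, is check the file on a computer.  A little rawpy sketch along these lines (the file name is a placeholder, and the comparison is simplified) will tell you how much of the sensor data is genuinely saturated versus how much the JPEG-style rendering merely pushed to pure white:

```python
import numpy as np
import rawpy

with rawpy.imread("sky_shot.CR2") as raw:
    sensor = raw.raw_image_visible.astype(np.float32)   # raw sensor values
    saturation = float(raw.white_level)                  # sensor clip point

    # A JPEG-like 8-bit rendering, similar to what the camera's LCD shows.
    preview = raw.postprocess(use_camera_wb=True, output_bps=8)

truly_clipped = np.mean(sensor >= saturation) * 100
looks_clipped = np.mean(preview.max(axis=2) >= 255) * 100

print(f"Sensor values actually at saturation: {truly_clipped:.2f}%")
print(f"Preview pixels rendered as pure white: {looks_clipped:.2f}%")
```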

In the meantime, just be aware of the situation and plan your exposures accordingly!

Camera companies are (still) stupid

Seriously, why the hell, in this rather modern world we live in, aren’t cameras able to just copy files directly to an external hard drive without needing a computer in the middle?  It’s not that hard, people…

</rant>

So with that in mind, here’s a challenge for the wonderful CHDK hacker community – can you develop this capability?  Bonus points for repurposing the essentially worthless ‘Direct Print’ button into a quick backup-to-external-disk control.  That would be sweet!

 

Shades of Gray

There are two types of people in the world – those that divide things into two categories and those that don’t. I’m one of those that don’t :-)

Okay, look… I get it. Our brains are pattern-finding devices and nothing’s easier than putting things into either/or categories. But not everything is easily categorizable as Right or Wrong, Good or Evil, Black or White… in fact virtually nothing is. It’s a mixed-up muddled-up shook-up world, right? So when I get into discussions with people who see nothing but absolutes, I tend to push back a bit. At least in my opinion, that sort of Dualistic worldview is not just lazy thinking, it can quickly grow dangerous if it prevents people from considering all aspects of a situation.

The particular conversation that sparked this blog post somehow turned to the Taijitu – the classic Yin/Yang symbol – which for a lot of people has apparently come to embody this black-and-white worldview. Disregarding the fact that the Taijitu is a lot more nuanced than that, and is more about balance than absolutes, I decided to see if I could come up with something that more explicitly acknowledges the shades of gray that exist in the real world. A bit of image-processing mojo later, and I had this:

From a distance it maintains the general appearance and symmetry of the classic yin/yang symbol, but up close we see the real story – the edges are ill-defined and chaotic, nowhere is the symmetry perfect, and most of it is composed of shades of gray.
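If you want to play with the idea yourself, something in this spirit gets you most of the way there – build the classic taijitu geometry as a mask, then let low-frequency noise and blur rough up the edges and gray out the fills.  The sizes, gray levels, and noise amounts below are arbitrary starting points, not the exact recipe behind the image above:

```python
import cv2
import numpy as np

size = 1024
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]    # coordinate grid from -1 to 1

# Classic taijitu geometry: a big circle split by two half-size circles,
# plus the two small 'eye' dots.
big = x**2 + y**2 <= 1.0
upper = x**2 + (y + 0.5)**2 <= 0.25
lower = x**2 + (y - 0.5)**2 <= 0.25
dark = big & (((x < 0) & ~upper) | lower)
dark &= ~(x**2 + (y - 0.5)**2 <= 0.012)             # light dot in the dark lobe
dark |= (x**2 + (y + 0.5)**2 <= 0.012)              # dark dot in the light lobe

# Start from near-black / near-white rather than pure values, then let the
# gray in: low-frequency noise breaks the symmetry, blur smears the edges.
img = np.where(dark, 0.15, 0.85).astype(np.float32)
img[~big] = 1.0                                      # plain background
noise = cv2.GaussianBlur(np.random.rand(size, size).astype(np.float32), (0, 0), 18)
img = np.clip(img + (noise - 0.5) * 0.9, 0.0, 1.0)
img = cv2.GaussianBlur(img, (0, 0), 3)

cv2.imwrite("shades_of_gray.png", (img * 255).astype(np.uint8))
```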

I’m not sure what exactly to do with this now, but just in case anybody thinks it’s cool I’ve stuck a high-resolution version of it up on flickr with Creative Commons licensing. Do with it what you will, and if someone wants to put it on a T-shirt or something, could you send me one? :-)

Get out of Bed!

Angel Falls, Venezuela

Most of my photography is done while traveling and thus I do a lot of landscape work.  And one of the hardest things about landscape photography (other than, sometimes, the actual process of getting to the destination itself) is the fact that you have so little control over the subject.  Things like the weather can have a huge impact, obviously – blue skies can’t be arranged ahead of time.  But something that you can at least plan for, if not control, is the lighting.  Yes, every single day you’ve got that big light-source in the sky behaving in a very predictable fashion.  So use that knowledge!

An excellent case in point comes from my recent trip to Venezuela and a visit to Angel Falls.  I’ve already talked about this on the This Week in Photography podcast but because photo discussions often benefit from actual, um, photos, I wanted to go ahead and do a quick blog post about it.

Getting to the falls isn’t trivial.  Start with a puddle-jumper plane from Caracas to a small town in the middle of nowhere that has no roads leading to it – everything comes in via plane.  Then it’s several hours upriver in a very uncomfortable wooden canoe equipped with an outboard motor.  And then about an hour’s trek through the jungle.  But eventually we made it to a nice overlook of the falls, arriving sometime in the late afternoon.

And the sight was indeed spectacular and massive and awe-inspiring – the tallest waterfall in the world!   Took several photos of course and overall they were… fine.  Not particularly special other than the subject itself but certainly if that’s the best I’d gotten I would have been perfectly happy with it all.  Here’s one such shot from that afternoon:

Angel Falls, Venezuela

As you can see, the sky is mostly overcast and thus the lighting was pretty flat but that’s just the way it works sometimes.  And so we hiked back to where we would spend the night – a tin-roofed, open-air structure with a bunch of hammocks that we could curl into as the evening’s thunderstorm rolled in.

But ah, it was the next morning when the magic happened.  I woke just around dawn – feeling a bit chilly and not particularly inclined to get out from under the cozy blanket that I was wrapped in.  Still, I could see that the sky was reasonably clear and thus that morning sun might actually be doing something useful for me.

Everybody else in our group was still asleep so I moved as quietly as possible when pulling out my camera and pulling on my shoes and heading down the path to a decent lookout point I’d scouted out the night before.  And as I pushed through the foliage and got to the edge of the river where the view of the falls was unblocked, I could see that getting out of bed (or out of hammock, as the case may be) had been a very good decision indeed.  The morning light was hitting the side of the tepui at the perfect angle, the waterfall was highlighted almost as if it had a spotlight on it, and the clouds were interesting and well-placed.  I fired off a few different shots, playing with the framing, as more clouds started to move in.  Of all those shots, the one shown at the top of this post is the one that really nailed it for me. (click here if you want to get to a full-sized version)

So there you have it – the difference between mid-afternoon light and what you can get during the magic hours at dusk and dawn.   Timing, as they say, is everything.

As it turned out my morning timing was about as good as it could get.  Looking at the timestamp on the photos, the shot above was taken at 6:15:04 am.  This next shot was taken at 6:20:17.

Angel Falls in fog

If you look closely you can see that somewhere behind the cloud that has moved between me and the waterfall there is still a nice spot of light on the cliff-face but from this location that’s not doing me much good.  Quite a difference in a matter of only 5 minutes and 13 seconds!