Mirror Box

Bored?

Buy a box of 6 mirror tiles (i.e. something like this)

Use some duct-tape and make a box out of them (mirror-side inwards)

(put a little duct-tape handle on the 6th tile to make it easy to take on and off)

Set timer on camera (turn flash on), place camera in box, put mirror ‘lid’ onto box before shutter fires…

…get cool photo.

Toss in balls of aluminum foil – because they’re shiny.

Play with light-leaks at the seams of the box.

Add candles.  Pretty.

Finally, add Droids.  Everything is better with Droids.

(Click on any of the above images to biggerize them.  Also on Flickr, including a few more.)


UPDATE:  Here’s someone who was inspired by the idea and did a MirrorBox photo-a-day throughout January 2012: http://flic.kr/s/aHsjxK8ZKe  Cool!


Photography Futures talk

Gave a talk last month at the Mindshare LA conference, chatting about where camera/photo technology is headed and the ramifications for our everyday lives.

Mindshare events typically feature a very eclectic program (the night I spoke there were also presentations on building a helicopter-motorcycle, permaculture, and how women can use pole-dancing for self-empowerment) so my talk was targeted to a diverse, nontechnical audience –  but I think I managed to pack a few interesting tidbits into the 15 minute window I had. Enjoy!

(And if anybody out there wants to fly me to random interesting locations on the planet to give the same talk, I’d be happy to do so :-))

Cloud sync of photos – what I REALLY want…


Just got done taking a quick look at today’s announcement of Adobe Carousel – their new cloud-based photo-syncing/sharing/editing app. Not sure I totally get the utility of it for me, personally – Dropbox gives me easy sync between all my devices already and I can’t imagine I’ll be doing a lot of image-editing on the fly. Your Mileage May Vary, of course.

But it got me to thinking about a piece of the storage/cloud issue that hasn’t really been well-addressed yet: The ability to sync some things locally but still have access to more things on an on-demand basis. In other words, I want to be able to have all my ‘favorite’ photos synched between all my devices – phones and laptops – and kept available as a local copy. (i.e. I want to be able to see them even if I have no network connection). But if I do have a network connection then everything else should be easily accessible as well.

Dropbox sort-of does this – they allow you to specify certain folders that aren’t synched to specific devices, but if you want to get to those non-synched folders you have to go into the clunky web interface. Wouldn’t it be nice if you could just see those other filenames as if they were local files and if you clicked on them then Dropbox would transparently run out and grab them as requested? Maybe even have a local cache where the most recent ones are kept around until something newer flushes them? In other words, it would seem as if I have all of my files local, but some of them would just take a little bit longer to access.
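To sketch what I mean (purely illustrative – the fetch_from_cloud callback and the cache size below are hypothetical, not anything in Dropbox’s actual API), the behavior boils down to an on-demand fetch backed by a small LRU cache:

```python
import os
from collections import OrderedDict

class TransparentStore:
    """Sketch of 'everything looks local' file access.

    Synced files live on disk; everything else is fetched on demand
    and kept in a bounded LRU cache until something newer flushes it.
    """

    def __init__(self, local_dir, fetch_from_cloud, cache_limit=50):
        self.local_dir = local_dir
        self.fetch_from_cloud = fetch_from_cloud  # hypothetical cloud call
        self.cache = OrderedDict()                # name -> bytes, in LRU order
        self.cache_limit = cache_limit

    def read(self, name):
        # 1. Fully-synced files are just local files.
        path = os.path.join(self.local_dir, name)
        if os.path.exists(path):
            with open(path, 'rb') as f:
                return f.read()
        # 2. Recently-fetched files come straight out of the cache.
        if name in self.cache:
            self.cache.move_to_end(name)  # mark as most recently used
            return self.cache[name]
        # 3. Otherwise fetch transparently, evicting the oldest entry.
        data = self.fetch_from_cloud(name)
        self.cache[name] = data
        if len(self.cache) > self.cache_limit:
            self.cache.popitem(last=False)
        return data
```

The whole point is that cases 1 and 3 look identical to the user – a file is a file, some just take a moment longer to open.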

And going beyond this sort of ‘dumb’ file synching (where all files are treated the same way), what I’d really love to see is a storage/sync tool that understands that photos are different from other file types, and that oftentimes it’s perfectly acceptable to have a lower-resolution (or lower-quality) version of the photo available instead.  A 1 megapixel medium-quality JPEG version of an image is wayyyy smaller (like 99% smaller!) than a 16 megapixel original.
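The arithmetic there is rough but compelling.  Here’s the ballpark version (the bytes-per-pixel rates are numbers I’m assuming for illustration, not measurements):

```python
# Ballpark file-size comparison (illustrative numbers, not measurements).
original_mp, preview_mp = 16, 1
bpp_high, bpp_medium = 0.5, 0.2   # rough JPEG bytes-per-pixel at each quality

original_kb = original_mp * 1_000_000 * bpp_high / 1024    # ~7,800 KB
preview_kb = preview_mp * 1_000_000 * bpp_medium / 1024    # ~200 KB
print(f"Preview is {1 - preview_kb / original_kb:.0%} smaller")  # ~97-98%
```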

In other words, my ideal system would intelligently manage my photos so that everything is available at low resolution even if I have no network connection but if I am connected to the magical ‘cloud’ then it (transparently) will serve me the full-resolution image instead.
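A minimal sketch of that serving logic (the version names and the local_store/download helpers are all invented for illustration):

```python
# Preferred versions, best first.  Only the smallest is guaranteed local.
VERSIONS = ['original', 'screen_2mp', 'preview_1mp']

def best_available(photo_id, is_online, download, local_store):
    """Serve the best version of a photo reachable right now."""
    for version in VERSIONS:
        if local_store.has(photo_id, version):
            return local_store.get(photo_id, version)   # already on disk
        if is_online():
            data = download(photo_id, version)          # transparent fetch
            local_store.put(photo_id, version, data)    # cache for next time
            return data
    raise FileNotFoundError(photo_id)  # shouldn't happen: preview is always local
```

Offline, the loop falls through to the low-res preview; connected, it grabs the full-resolution original without me ever having to think about it.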

(The same thing conceivably applies to music as well of course. Let me keep a few GB of my favorite songs on my iPhone at full quality, but if I really want to listen to something obscure from the other 20GB of music I own then I can at least play a slightly degraded version of it.)

The key to all of this (and why it would ideally be something built into the operating system) is to make it as transparent as possible.  Image viewing and sharing apps shouldn’t require me to keep track of whether I’m offline or not – they should just figure it out and do the best job they can at showing me the best-quality version possible.  Simple :-)

I’m seeing a lot of ‘cloud’ services pop up – and they’re all great if you’ve got a 100% reliable network connection behind them. But people, I’ve got AT&T – a 100% reliable network is, unfortunately, not something I’m familiar with…

Long shutter helicopter lights

So I went ahead and bought one of those cool remote-controlled mini-helicopters the other day (http://amzn.to/osT7CM).  Gotta say – that’s a hellovalotta fun for 20 bucks.

And since it has these handy red/blue lights on the front of it that flash alternately, it seemed like a good idea to try some long-exposure photos whilst flying it around my living room last night.  Pretty neat (though you can definitely see how incredibly skill-less I currently am with the thing).  Still, so far the crashes haven’t been severe enough to do any damage to ‘copter, room, or my face, so I guess I’m ahead of the game.


Some abstract patterns with the camera pointed up at the ceiling


And yes, then I crashed it into the camera…


Zipping around the globe seemed like a good challenge.  At some point I’m hoping to be good enough to actually go, you know, AROUND the globe (instead of just bouncing up and down above it, barely avoiding collisions with continents or polar ice caps).

The battery was dying at this point so about all I could do was get it to skitter along the hardwood floor.  (A floor which desperately needs to be swept, apparently!)

Beyond Focus (Part 1)

Array Camera on iPhone

A lot of buzz lately on Lytro, the company that’s building a new camera based on lightfield (aka plenoptic) technology.  In a nutshell, they’re building a camera that will allow you to refocus your photos after you’ve taken them.  (Go poke around here if you want to see some examples of what is being discussed.)

And it is indeed some very cool technology (although I’m a little surprised that it’s taken this long to productize it, given that the basic concepts have been floating around for a long time now).  (Okay, technically they’ve been floating around for a really, really long time now.)

But I think it’s worth talking about how this fits into the workflow of a typical photographer.   Because, although there’s been no shortage of people willing to label this as a ‘revolution’ in the way we do photography, I personally see this as more of a feature – one more thing that will be nice to have in your bag of tricks.

A useful trick, to be sure, but if you lined up all the features I’d like in a camera it would probably be somewhere in the middle of the pack.  Somewhere (far) below a high resolution/high dynamic range sensor.  Somewhere below good ergonomics.  Maybe (if it’s well-implemented) I’d choose this over an extremely high burst-speed or an integrated Wi-Fi connection.  Maybe.

But it’s the ‘well implemented’ part that worries me.  Because it’s not a feature that would compel me to buy a particular camera above all others.  There are always trade-offs when you’re creating a camera and the Lytro camera will be no exception.  So unless they manage to execute extraordinarily well on all the other aspects of their camera, it could easily turn into something that’s just not all that compelling for most people.  If I said there is a camera out there that can capture 100 megapixel images you might think that’s a very cool device, right?  But then if you find out that it can only shoot at ISO 50 – that it produces absolutely unusable images unless you’re shooting in bright sunlight… Well, that camera is suddenly a lot less interesting.

So, in much the same way as what we’ve seen with the Foveon sensor – a cool technology that was never paired with a camera that was all that good in other areas – we’ll just have to wait and see what the eventual product from Lytro will look like.  I’m optimistic – there are some smart folks there – but it’s not a trivial challenge.

I think the best thing we can hope for is that this sort of feature doesn’t remain something that is only available from Lytro.   Hopefully we’ll see similar capabilities coming from the other manufacturers as well.  Yes, there are definitely patents in place on a lot of this but there are also a number of different ways to implement light-field image capture.  For instance, you can (as I believe is the case with Lytro) use a lens that captures multiple viewpoints and sends them all to a single large sensor.  Or, you can use multiple lenses each coupled with their own sensors.  Either way you get an image (or set of images) that will require some post-processing to produce something that’s human-friendly.

Which is fine.  Something I’ve been predicting for years (and have said so many times on the This Week in Photography podcast) is that the most common camera we’ll see in the future will be a computational camera.  And personally I’m willing to bet that more often than not it will be composed of an array of smaller lenses and sensors rather than a single monolithic lens/sensor.

Why?  Big sensors are expensive to manufacture, largely due to the fact that a single defect will require you to scrap the whole thing.  Big glass is also expensive – it’s very difficult to create a large lens without introducing all sorts of optical aberrations.  (Although smart post-processing will be increasingly important in removing those too).

So it’s clear to me that there’s a better and more cost-effective solution waiting to be developed that uses an overlapping array of inexpensive cameras.  (Lytro started out this way, incidentally, via the Stanford Multi-Camera Array.)

The camera in your phone, or the one stuck in the bezel of your laptop, costs only a few bucks when purchased in quantity.  Now make yourself a 5×5 array of those things.  If each one is 8 megapixels (easily available these days), that’s a lot of resolution to play with.  No, you won’t be able to generate a single 200-megapixel image, but you’ll still be able to get some serious high-res imagery that also has a lot of additional benefits (see my next post for more on these benefits).
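Back-of-the-envelope (and note that the ‘efficiency’ factor here is completely invented, just to show that even pessimistic overlap/alignment losses leave plenty of pixels):

```python
# Hypothetical 5x5 array of cheap phone-camera modules.
sensors = 5 * 5               # 25 modules
mp_each = 8                   # megapixels per module
raw_samples = sensors * mp_each
print(f"Raw pixel samples captured: {raw_samples} MP")   # 200 MP of raw data

# Overlapping views don't merge into a single 200 MP image; this factor
# is invented purely for illustration.
overlap_efficiency = 0.25
print(f"Illustrative effective image: ~{raw_samples * overlap_efficiency:.0f} MP")
```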

And yes, I’m completely ignoring a ton of engineering challenges here, but there’s nothing I’m aware of that feels like it can’t be solved within the next few years.

Bottom line:  Don’t be at all surprised if the back of a future iPhone looks something like the image at the top of this post.  I’m not gonna guarantee it, but I’m willing to take bets…

(Click here for Part 2 of this discussion)

Beyond Focus (Part 2)

So let’s talk some more about the ramifications of the lightfield/plenoptic technology that I looked at in my last post.  For Lytro, the marketing push (and the accompanying press) is all about the ability to re-focus the image after the photo’s been taken, as a post-process.  And they show you these massive foreground-to-background refocuses.  But that’s really just a parlor trick – how often are you suddenly going to decide that the photo you shot would be much better off if the background were in focus instead of the person standing in the foreground?

On the other hand, being able to do a very subtle refocus – for example to make sure that the eyes in a close-up portrait are perfectly sharp – that has real value to almost all photographers and there’s many a shot I’ve taken where I wish I could do exactly that!

But there’s actually a lot more to this technology than just refocusing. In reality what you’ve got here (or more specifically what can be computed here) is an additional piece of data for every pixel in an image – information about the depth or distance-from-camera.

So it’s not just the ability to refocus on a certain object in the image – it’s overall control over the focus of every depth plane.  The narrow-focus ‘Tilt-Shift’ effect becomes easy.  You can even have multiple planes of focus.  And macro photography is almost certainly going to see a big benefit as well.
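Here’s a rough sketch of what that per-depth-plane control looks like once every pixel carries a depth value (Python with NumPy/SciPy; the image and depth map are assumed inputs, and this simple layered blur ignores the occlusion handling a real implementation would need):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focal_depth, strength=8.0, layers=8):
    """Synthetic depth-of-field: blur each pixel in proportion to its
    distance from a chosen focal plane.

    image: HxWx3 float array; depth: HxW array in [0, 1], where
    brighter (larger) values are closer to camera.
    """
    # Assign every pixel to one of `layers` quantized depth planes.
    plane = np.minimum((depth * layers).astype(int), layers - 1)
    result = np.zeros_like(image)
    for i in range(layers):
        mask = plane == i
        if not mask.any():
            continue
        mid = (i + 0.5) / layers                   # plane's representative depth
        sigma = strength * abs(mid - focal_depth)  # zero blur at the focal plane
        blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
        result[mask] = blurred[mask]
    return result

# refocus(img, depth, focal_depth=0.9) keeps a near subject tack-sharp;
# refocus(img, depth, focal_depth=0.1) 'refocuses' onto the background.
```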

While we’re at it, you’ll also have the ability to choose the exact characteristics of the out-of-focus areas – the bokeh.  This would include the ability to create ‘stunt bokeh’ similar to what certain Lensbaby products can produce (see here).

Oh, and it’s also pretty easy to generate a stereo image pair, if you’re into that sort of thing…

But wait, there’s more!  Making use of depth information is something we do all the time in the visual effects world.  Consider the image below.

Here’s the depth image for that scene, where brighter areas are (obviously) closer to camera.

In the same way that we can use this information to choose where to focus, we can apply other image adjustments to different depth areas.  Want to introduce some atmosphere?  Just color-correct the ‘distant’ pixels to be lower contrast.
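As a quick sketch of that trick (NumPy again; the haze color and strength are invented numbers – a real compositing package gives you far more control):

```python
import numpy as np

def add_atmosphere(image, depth, haze_color=(0.7, 0.75, 0.8), max_haze=0.6):
    """Blend distant pixels toward a haze color to fake atmosphere.

    image: HxWx3 floats in [0, 1]; depth: HxW in [0, 1], brighter = closer
    (so 1 - depth acts as distance from camera).
    """
    distance = 1.0 - depth                      # far pixels -> large values
    amount = (distance * max_haze)[..., None]   # per-pixel blend weight
    haze = np.asarray(haze_color).reshape(1, 1, 3)
    # Lerping toward a single haze color flattens contrast in the
    # distance - exactly the 'lower contrast far away' look.
    return image * (1.0 - amount) + haze * amount
```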

From there it’s just a step away to compositing new elements into the scene – anything from a snowstorm to ravenous rampaging raptor robots.

Bottom line?  Computational photography in its many forms is the future of photography.  We’re almost certainly going to see the single-lens, single-sensor paradigm go away and we’re absolutely going to live in a world where more and more of the image-creation process occurs long after you’ve actually ‘taken’ the photograph.   Personally, I’m looking forward to it!

X vs. Pro.

I’ve had a couple of people ask for my thoughts on the new FCPX release given my history with Apple and in particular my experience with how they dealt with another product that was focused (in our case almost exclusively) on professionals – the compositing software ‘Shake’.  So, even though I don’t think they’re totally analogous events, I figured I’d use it as an opportunity to make a couple of points about (my perception of) how Apple thinks.

For those that aren’t familiar with the history, Shake was very entrenched in the top end of the visual effects industry.  The overwhelming majority of our customers were doing big-budget feature film work and were, naturally, all about high-end functionality.

So after Apple acquired us there was a lot of concern that Cupertino wouldn’t be willing to continue to cater to that market and, although it took a few years, that concern did indeed come to pass.   The development team was gradually transitioned to working on other tools and Shake as a product was eventually end-of-life’d.

And back then the same questions were being asked as now – “Doesn’t Apple care about the high-end professional market?”

In a word, no.  Not really.  Not enough to focus on it as a primary business.

Let’s talk economics first.  There’s what, maybe 10,000 ‘high-end’ editors in the world?  That’s probably being generous.  But the number of people who would buy a powerful editing package that’s more cost-effective and easier to learn/use than anything else that’s out there?   More.  Lots more.  So, a $1000 high-end product vs. a $300 product for a market that’s at least an order of magnitude larger.   Clearly makes sense, even though I’d claim that the dollars involved are really just a drop in the bucket either way for Apple.

So what else?  I mean what’s the real value of a package that’s sold only to high-end guys?  Prestige?  Does Apple really need more of that?  Again, look back at Shake.  It was dominant in the visual effects world.  You’d be hard-pressed to pick a major motion picture from the early years of this century that didn’t make use of Shake in some fashion.  And believe me, Lord of the Rings looks a lot cooler on a corporate demo reel than does Cold Mountain or The Social Network.  Swords and Orcs and ShitBlowingUp, oh my.  But really, so what?

Apple isn’t about a few people in Hollywood having done something cool on a Mac (and then maybe allowing Apple to talk about it).  No, Apple is about thousands and thousands of people having done something cool on their own Mac and then wanting to tell everyone about it themselves.  It’s become a buzzword but I’ll use it anyway – viral marketing.

And really, from a company perspective high-end customers are a pain in the ass.  Before Apple bought Shake, customer feedback drove about 90% of the features we’d put into the product.  But that’s not how Apple rolls – for them, high-end customers are high-bandwidth in terms of the attention they require relative to the revenue they return.  After the acquisition I remember sitting in a roomful of Hollywood VFX pros where Steve told everybody point-blank that we/Apple were going to focus on giving them powerful tools that were far more cost-effective than what they were accustomed to… but that the relationship between them and Apple wasn’t going to be something where they’d be driving product direction anymore.  Didn’t go over particularly well, incidentally, but I don’t think that concerned Steve overmuch… :-)

And the features that high-end customers need are often very, very unsexy.  They don’t look particularly good in a demo.  See, here’s the thing with how features happen at Apple: to a great extent, product development is driven by how well things can be demoed.  Maybe not explicitly – nobody ever told me to only design features that demoed well – but the nature of the organization effectively makes it work out that way, because a lot of decisions about product direction make their way very far up the management hierarchy (often to Steve himself).  And so the first question that comes up is ‘how are we going to show this feature within the company?’  All the mid-level managers know that they’re going to have a limited window of time to convey what makes a product or a feature special to their bosses.  So they either 1) make a sexy demo or 2) spend a lot of time trying to explain why some customer feels that some obscure feature is worth implementing.  Guess which strategy works best?

And by this I don’t mean to imply at all that the products are style over substance, because they’re definitely not.  But it’s very true that Apple would rather have products which do things that other products can’t do (or can’t do well), even if it means they leave out some more basic-but-boring features along the way.  Apple isn’t big on the quotidian.  In the case of FCP, they’d rather introduce a new and easier and arguably better method for dealing with cuts, or with scrubbing, or whatever, even if it means that they need to leave out something standard for high-end editors like proper support for OMF.  Or, to look all the way back to the iPod, they’d rather have a robust framework for buying and organizing music instead of supporting, say, an FM radio.  And it’s why Pages doesn’t have nearly the depth of Word but is soooo much more pleasant to use on a daily basis.

So if you’re really a professional you shouldn’t want to be reliant on software from a company like Apple.  Because your heart will be broken.  Because they’re not reliant on you.  Use Apple’s tools to take you as far as they can – they’re an incredible bargain in terms of price-performance.  But once you’re ready to move up to the next level, find yourself a software provider whose life-blood flows only as long as they keep their professional customers happy.  It only makes sense.


ADDENDUM.  I suppose I should make it clear (since some people are misinterpreting a few things) that I’m not complaining about Apple’s decisions with regards to either Shake or FCPX.  (As a stockholder I’ve got very little to complain about with regards to Apple’s decisions over the past several years :-))

And, in spite of the fact that MacRumors characterized this post as saying “Apple Doesn’t Care about Pro Market” I don’t believe at all that ‘professionals’ should immediately flee the platform.  As with anything, you should look at the feature set, look at the likely evolution, and make your own decisions.  My perception of the high-end professional category is informed by my history in feature-film production, which is a large, cooperative team environment with a whole lot of moving pieces.  Yours may be different.

Ultimately my goal was to shed some light on the thought-processes that go into Apple’s decisions, and the type of market they want to address.   Bottom line is that I do think that FCPX will provide incredible value for a huge number of people and will undoubtedly continue to grow in terms of the features that are added (or re-added) to it.   Just don’t expect everything that was in FCP7 to return to FCPX because they’re really different products addressing different markets.  It’s up to you to decide which market you work in.