Camera Viruses!!! OMG!!! (Fear-mongering)


Now that we’re finally starting to see some mainstream cameras that can run apps (see previous discussion here) I suppose it’s time to talk a little bit about one of the potential downsides of such a scenario.

Because as soon as you allow people to download and run third-party software, you’ve got to start thinking about what viruses might come along with those apps. (And I’m using ‘virus’ generically, including any sort of malicious code that might run without your knowledge on the camera).

The first thing that comes to mind is privacy – because if your camera also features network connectivity, that means someone could surreptitiously redirect your photos to a remote server somewhere. Bad enough if you’re being monitored by a spouse or an employer, but there may be even more consequential cameras to infiltrate…

(Photo: President Barack Obama holding a Canon camera)

(Keep in mind too that cameras are ALL going to have GPS chips embedded in them soon enough… instant tracking/surveillance device! Even if you’ve got the lens-cap on, there’s probably a microphone on your camera too… one that can potentially be remotely activated and then monitored)

Most viruses are of course created for financial gain. Infect a remote computer and then use it as a server for sending out prescription-drug or junk-enhancement advertisements. What’s the photo equivalent?

If we consider the increasingly powerful processors that are coupled with cameras these days – processors that will eventually be able to run real-time photorealistic rendering algorithms – it’s not unreasonable to expect a virus that is capable of actually altering stills or videos while they’re still in camera.

True, that could just be a bit of fun – the so-called Walter White virus, which replaces everybody’s face with Bryan Cranston’s:


(Been catching up on Breaking Bad lately…)

But (back to the financial incentives) what about a virus that keeps an eye on your photos in order to identify branded objects – someone holding a can of Heineken, someone wearing Nikes, etc. Beyond just providing market research on your personal buying habits, a sophisticated enough algorithm could even REPLACE some of those logos with competing products. Not only does this hijack the unstated endorsement of a brand when you post one of those images to Facebook, it also effectively alters your memories of the event. Your personal history purchased by the highest ethics-challenged bidder.

What else? Photo forensics. Altering the date/time/location of a photo is easy enough, but retouching also becomes MUCH more difficult to detect if it happens BEFORE the image is converted to a JPEG and stored to memory.

Financial transactions. My bank allows me to take a photo of a check in order to deposit it. What if that image gets altered before I send it to the bank…???

Security cameras. The movie trope of replacing security camera footage with alternate footage will be more and more of a concern.

Sporting events. There’s a lot of financial incentive to being able to affect the outcome of a major game. Hack the instant-replay camera – the one they go to when reviewing a play?

Any other ideas? What’s your best (or most outlandish) guess for the sort of camera viruses we might see out in the wild?


Using Faceship the Way it Wasn’t Intended

So we’ve got this fun little app called Faceship out there – www.appstore.com/faceship. It is, as the name would imply, very face-centric in the sense that it provides a bunch of effects that are intended to be applied to people’s faces. Stuff like this:

(A few example face-effect photos)

(These were all done by our users)

But naturally I (and a few other like-minded nonconformist sorts) can’t help but try these effects in other situations too. Here are a few:

(A couple of examples from Lübeck, Germany)

While obviously you can use something like Photoshop to do far more sophisticated editing on an image once you get home, I’m finding that there’s definitely a joy to the immediacy of holding the camera up and seeing what the result will be in a live preview. And for that matter I almost always end up adjusting where I’m pointing the camera or where I’m standing in order to create a composition that is better targeted to the effect I’m using… something that I wouldn’t necessarily be able to do if this were purely a post-process and I’d already locked-in a specific framing.

At any rate, if you’re inclined, go download the app and give it a shot – I’d love to see some more out-of-the-box photos created with the thing. The basic app is free (OMG FREE!!!) which gets you several of the effects shown above and then you can do an in-app purchase (99¢ – find that under the couch cushions or something) to get all the currently-available effects.

And who knows, if we see enough interest in this sort of thing we might just do an app specifically dedicated to this sort of photography. Whaddya think?

Viral Videos, iPhone Apps… and Politicians with a Tiny Face

I’ve got a longer post coming about this but just wanted to let everybody know that we’ve got a new iPhone app out! It’s called ‘Faceship’ and for this version 1.0 release its one and only purpose in life is to give people Tiny Faces.

Why Tinyfaces? Well, I did this quick video of Mitt Romney with a TinyFace a little while ago: (Go ahead and give it a watch – it’s only a few seconds long).

Somehow it went super-viral, with over 1.75 million views so far. Which is, um, crazy.

So we figured we’d make an app for anybody to give themselves (or their friends) a tinyface. (Photos only – no video… yet)

The app’s called Faceship and it’s FREE. Not even any ads in it. Because we love you :-)

So I’d really really love it if you’d download it, give it a play-with, and most importantly, tell your friends and SHARE TINYFACE PHOTOS around the web. Apps live or die by word of mouth, so any help here would be super-appreciated. Seriously, thanks!

(Getting a nice review doesn’t hurt either – if you’re feeling generous please tell us what you think!)

(Oh, and if you don’t like the fact that the original video featured Romney, I did an Obama one too, and also a couple of other folks.  And a Donald Trump with a Bald Head.  Here’s our YouTube channel if you want a few more minutes of amusement – enjoy!)

Lifelogging – is it time?

I’ve done a bunch of talks lately, in places ranging from Hong Kong to Costa Rica, about the future of cameras and photography. (Here’s one) And one of the things I discuss is this concept of ‘Lifelogging’ – the idea of wearing an always-on camera constantly, capturing everything you do as you go about your daily life.

At first glance I think a lot of people find the idea to fall somewhere along the line between obtrusive and completely boring. Who wants to record everything they do? Why? But the more I’ve thought about it the more I’m convinced that it’s going to become very very common. Because when it comes down to it, don’t you wish you could call up photos from all the key incidents in your life? Don’t you wish your memories were preserved beyond what the organic brain is capable of?

So when I saw yesterday’s announcement of a Kickstarter for a tiny wearable camera (called ‘Memoto’) that will take a photo every half-minute, I wasn’t even slightly surprised. And after a few minutes of considering it I went ahead and ordered one (in spite of the rather hefty $200 price tag.)

Is this going to be the device that makes the concept mainstream? I doubt it, but it’s an interesting step in that direction. I’m sure there will be stories of how it’s being misused – we’re going to hear about someone getting Punched In The Face because they wore it somewhere that’s not appropriate. (And by ‘appropriate’ I include just about anyplace where the photographee doesn’t want to be on camera).

But ultimately I think it’s inevitable that something like this will catch on with a lot of people. Biggest question for me, really, is whether it’s going to happen before or after the cameras get small enough to be undetectable by the people you’re interacting with on a daily basis.

For me personally I doubt I’ll wear it on a daily basis – mostly because That Would Be Weird. But as someone who loves to travel, I suspect the first trip I take after I get the device will see me clipping it on as soon as I head for the airport and not removing it again until I’m back home.

As for the device itself? I like a lot of what they’ve done. Small, rugged, decent battery life, and simple.  And they seem to have the expertise to make it happen.

The devil is in the details, of course. How are the low-light capabilities, for example? And it’ll be interesting to see what they decide on for the field of view. (Personally I suspect that the best way to go might be a very wide-angle lens – fisheye even – combined with some software that can rectilinearize it after the fact. With a 5 megapixel image you’ve got some leeway to do this, and ultimately it’s important to remember that for most people the value is going to be in capturing the moment as completely as possible, rather than creating photos that are suitable for wall-hanging).

And, as is to be expected, if you look at the comments on the kickstarter you’ve already got tons of people asking for specific features. But I like the decisions they’ve made so far in terms of keeping it small and simple and hopefully they’ll stay the course.

You can check out the Kickstarter here  (they’ve already reached their funding goal, less than 24 hours after going live) and once mine shows up I’ll definitely be talking about it some more.

 

…Because their Lips are Moving

One of the earliest posts I did on this blog related to new technologies in truth-detection and as the political season is heating up again I thought it would be worthwhile to revisit some of those points.  (Here’s the original post)

In particular, I’m interested in the variety of non-invasive technologies that are becoming available to tell whether or not a person is consciously lying.  To a greater or lesser extent, most normal (non-sociopathic) people have some sort of physical manifestation whenever they intentionally lie.  This can manifest as micro-expressions, as fluctuations in the pitch, frequency and intensity of the voice, and even as changes in bloodflow to the face (which can be detected by an infrared camera.)

Are these technologies 100% reliable as lie-detectors?  Not even close.  But they’re also not completely without merit and can, particularly if they’re used in conjunction with other techniques, be very effective.

More importantly, they’re only going to get better – probably lots better. And so even though we may not have the technology yet to accurately and consistently detect when someone is lying, we will eventually be able to look back at the video/audio that is being captured today and determine, after the fact, whether or not the speaker was being truthful.   A bit of retroactive truth-checking, if you will.

In other words, even though we may not be able to accurately analyze the data immediately, we can definitely start collecting it. Infrared cameras are readily available, and microexpressions (which may occur over a span of less than 1/25th of a second) should be something that even standard video (at 30fps) would be able to catch – and of course we’ve got much higher-speed cameras than that these days. And today’s cameras should also have plenty of resolution to grab the details needed, particularly if you zoom in on the subject and frame the face only.

So it seems to me that someone needs to plan on recording all the Presidential (and vice-presidential) debates with a nice array of infrared, high speed, and high-definition cameras. And they need to do it in a public fashion so that every one of these candidates is very aware of it and of why it is being done.

Or am I just being naïve in thinking that the fear of eventually being identified as a liar would actually cause people (or even politicians) to modify their current behavior? Maybe, but it seems like it’s at least worth a shot.

Watching Movies on the iPhone 5

I’ll admit it… I have, on occasion, actually watched a movie or two on my iPhone.  Forgive me, David Lynch.

(Hell, I’ll even condemn myself to a deeper circle of hell by admitting that I actually watched one of the most glorious wide-screen movies ever – Lawrence of Arabia* – on an iPhone while flying to Jordan earlier this year.   Of course I’ve seen it more than a few times already, but still…)

So even though it’s far from my preferred viewing scenario, all this talk about the alleged size-change of the next-generation iPhone got me to thinking about how that extra real-estate would affect movie-watching on such a device.

The rumormongers all seem to have hit consensus that the next iPhone will keep the same width of 640 pixels, but will extend the height from the current 4S value of 960 up to 1136 pixels.  Do the math on that and you’ll find that the screen has about 18% more area.  ( i.e. [640*1136]/[640*960] = 1.18333 times larger.)

But there’s more to the story when you actually sit down to watch a movie, because every movie has a specific aspect-ratio  that will be fit into the screen you’re viewing it on.  So if we’re looking at something like The Godfather on our iPhone 4S (assuming our digital file is in the correct 1.85:1 aspect ratio that the original movie was shot in), it will be scaled to be the maximum width of the display and then ‘letterboxed’ top and bottom with black.  Here’s an example of what this looks like (only instead of letterboxing with black bars I’ve made them a dark gray so you can see them better).

Now let’s look at the same movie on a taller (which, when we turn it sideways becomes wider) new device.

The aspect ratio of the movie stays the same but because the aspect ratio of the phone is much wider the image fits much better into the space we’ve got.  And thus the letterbox bars on top and bottom are much smaller.

Raiders of the Lost Ark?

Much nicer, eh?  In fact, even though the display is only 18% bigger, the fact that our widescreen movies fit so much nicer into the frame actually means that they’re a whopping 40% bigger.

Of course if you’re watching Gilligan’s Island – a show that was shot in the typical television aspect ratio of 1.33:1 (i.e. 4:3) – it doesn’t buy you anything, because the other dimension is the limiting factor.

(Not to denigrate 4:3 by associating it only with Gilligan, by the way. Casablanca, Citizen Kane, Wizard of Oz… all 4:3)

But for the most part this new aspect ratio is a really nice perk for the movie-watcher.  Here’s one more example using Game of Thrones which was shot with the HDTV standard aspect ratio of 16:9.  This is almost exactly the aspect ratio of the alleged new iPhone and so it will now fit perfectly (i.e. without any letterboxing at all).
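
If you want to sanity-check those percentages yourself, here’s a quick back-of-the-envelope sketch in Python (nothing iPhone-specific about it – it just computes how much of each screen a letterboxed movie actually occupies, using the screen dimensions and aspect ratios mentioned above):

```python
# How many pixels does a movie of a given aspect ratio actually occupy
# on each screen once it's letterboxed (or pillarboxed)?

def displayed_area(screen_w, screen_h, movie_aspect):
    width = screen_w
    height = width / movie_aspect
    if height > screen_h:            # too tall: height becomes the limit instead
        height = screen_h
        width = height * movie_aspect
    return width * height

iphone_4s = (960, 640)               # landscape orientation
iphone_5 = (1136, 640)

print(1136 * 640 / (960 * 640))      # raw screen area: ~1.18 (the 18% figure)

for aspect, label in [(1.85, "1.85:1 film"), (16 / 9, "16:9 HDTV"), (4 / 3, "4:3 TV")]:
    gain = displayed_area(*iphone_5, aspect) / displayed_area(*iphone_4s, aspect)
    print(label, round(gain, 2))     # ~1.40, ~1.40, and 1.0 respectively
```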

Personally, I’m looking forward to it.  Just don’t tell David Lynch.

(Caveat:  It’s worth noting that, depending on where you get your movies from, you might find different aspect ratios than what I’ve mentioned above.  Many movies are remastered to different aspect ratios depending on their intended release platform – DVD, iTunes, whatever.  If you’re watching a rip of a DVD from 1990, all bets are off)

Also, completely unrelated to this article, feel free to go grab a copy of my FreezePaint iPhone app, which is already tonz o’ fun and which will be at least 40% MORE FUN on the iPhone 5.  Guaranteed.

(*speaking of Lawrence of Arabia, while I was grabbing some images for this blogpost I came across this, which is completely unrelated to aspect ratios or iphones but was just too awesome not to post.  Click to make bigger.  Didn’t find it with any attribution so if anybody knows who did it I’ll be happy to credit them).

Apple Tablet Prototype

(Photo: the tablet prototype shown in the AppleInsider article)

Just saw this article on AppleInsider where a photo of one of the original Apple tablet prototypes is shown. Quick story about my very brief interaction with this thing.

Although the article says that it may have been developed sometime between 2002 and 2004, I’m almost positive that the date I saw it would have been late 2004 at the earliest and more likely 2005.

They brought a few of us from the pro-apps group – UI- and Product-Designer types – into this little windowless room in Cupertino and there was a cardboard box sitting on the table. Once they were sure the door was locked and they made it very clear that everything we were going to see would not be discussed outside of this room, they lifted the box off to reveal something that looked very much like what’s shown in the photo above. Pretty much the same footprint as the then-current 12″ Aluminum PowerBook G4 but thinner and with the white polycarbonate case.

The reason we were being brought in to talk about it was because they wanted to get people coming up with a variety of multi-touch gestures that might be useful. One of the guys in our group (whom I won’t mention by name since he still works there) spent a bunch of time generating cool ideas that were then fed back into the secret machine.

It was only a few years later when the phone was announced that I realized this was, yet again, a bit of masterful misdirection by Apple – something they do even within the company. By this point they were almost certainly already working on the phone, but rather than show us that, they put the prototype tablet in front of us instead, thereby limiting the number of people who had the real story. (Being a part of pro-apps rather than one of the more core groups meant we were by definition a bit more on the periphery, so this wasn’t a surprise.)

It also shows how they were already in the process of generating patents on multi-touch gestures. Interesting to note that this must have been right around the same time that Apple acquired Fingerworks.

 

CAMERA FEATURES I WANT (And that really really shouldn’t be all that hard to implement)

Since camera manufacturers continue to annoy me with their unwillingness to add the features I want (yes, I’m taking this personally) I guess I’m about due for another blogpost ranting/whining about it.  I’ve covered some of these things before in this blog (and quite often on the This Week in Photo podcast) but it seems worth having an up-to-date list of some obvious stuff.

Let’s set the ground rules first.  I’m not going to be talking about radical scientific breakthroughs here.  ISO 1billion at 100 Megapixels (capturing a dynamic range that includes both coronal activity on the sun and shadow details on a piece of coal… all in the same photo) will be cool, definitely.  But there’s some fundamental physics that still need to be dealt with before we get there.  No, all I’m asking for today are engineering – often software – features.  Things that really only require desire/resources and design/integration.

At this point it’s no secret that more people take photos with their phones than with dedicated cameras.  Obviously the main reason for this is because The Best Camera Is The One That’s With You, coupled with the paradigm-shift that comes with network connectivity to friends and social networks.  As an iPhone developer I know how crazy it would be to create a fun app – even one as fun as FreezePaint – without including the ability to share the images to Facebook/Twitter/Flickr/Instagram once they’ve been taken.

But beyond all that, what else does a ‘camera’ like the iPhone give us?  Tons, really, and the bottom line is that you can do photo things with your phone that no ‘traditional’ camera can touch.  (And this in spite of the fact that the iOS development environment is woefully poor at giving deep-down access to the camera hardware).  Because camera phones are an open platform, unbounded by a single company’s vision for what a camera can or should do.

So let’s postulate some basics – a standalone camera with a good sensor, a nice array of lens options, and a reasonably open operating system under the hood that allows full programmatic access to as much of the hardware as possible.  What can we come up with?   (And a caveat before we continue:  Yes I know that there are some cameras that can already do some of the things I’m going to mention.  Some of the things.  Sort of.  We’ll talk about that too.)

First, a few more basic wishlist items on the hardware side of things.

— Radios. Access to the same data network that my phone can get, and also shorter-range communication via bluetooth, WiFi. (And while you’re at it, couldn’t you just stick something in there that grabs the local time via radio signal and sets my camera’s time automatically?  I’ve got a $20 watch that can do this…)

With network and remote communication in place, all sorts of things happen.  The ‘sharing’ stuff mentioned above of course, but also things like remote control of the focus and shutter via my phone (because the phone is getting a live video feed directly from the camera’s sensor).

Seriously, take a look at this little beauty of an app for controlling an Olympus E-M5 via iPhone.  I love the features it supports. Timelapses with acceleration. Timelapse photos triggered by distance moved (i.e. every 50 feet or something). Sound-detection to trigger the shutter. Etc.  Only bummer is that it requires extra hardware on both phone and camera to communicate.

What else does having a networked camera give?  Hopefully I can buy flash devices that talk nicely too, via some standard transport like bluetooth or WiFi.  Radio Poppers and Pocket Wizards are great but these are professional-level tools that are way overkill (and overpriced) for what I need most of the time.  I just want to hold my camera in one hand and a flash unit (no bigger than a deck of cards, hopefully) in the other hand, extended off to the side to give a pleasing angle on the subject.

(Brief tangent on lights:  For a lot of closer-up work (not to mention video), it sure feels like continuous-light LED sources are starting to be a very viable alternative to xenon flash devices.  These too need to get remote-control friendly – take a look at the awesome kickstarter-funded ‘Kick’.  Sure would be cool, too, if I could buy a light/flash that could take the same batteries as my camera…but I’m getting way off-topic now).

— Sensors.  In this situation the word ‘sensor’ refers to all the auxiliary data that you may want to collect, i.e. accelerometers and gyroscopes and altimeters and compasses and GPS units.  Phone’s already got ‘em – put those in my camera too, please.  If you add it (and allow programmatic access to it), all sorts of cool things will start showing up, just as they have on phones.

For example, use information from the accelerometer to measure the camera’s movement and snap the photo when it’s most stable.  These accelerometers are sensitive enough that they can probably measure your heartbeat and fire the shutter between beats.
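
Here’s roughly what that first idea could look like, as a minimal sketch.  The `accelerometer.read()` and `camera.trigger_shutter()` calls are hypothetical stand-ins for whatever API an open camera might expose – they’re not a real library:

```python
import math
import time

STABILITY_THRESHOLD = 0.02   # allowed spread in measured g-force (smaller = steadier)
WINDOW = 10                  # how many recent samples to consider

def is_stable(samples, threshold=STABILITY_THRESHOLD):
    """True if recent accelerometer magnitudes barely vary (i.e. no shake)."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    return max(magnitudes) - min(magnitudes) < threshold

def shoot_when_stable(camera, accelerometer, timeout=2.0):
    """Poll the accelerometer and trip the shutter at the calmest moment."""
    samples = []
    deadline = time.time() + timeout
    while time.time() < deadline:
        samples.append(accelerometer.read())        # hypothetical: -> (x, y, z) in g
        samples = samples[-WINDOW:]
        if len(samples) == WINDOW and is_stable(samples):
            return camera.trigger_shutter()         # hypothetical shutter call
    return camera.trigger_shutter()                 # timed out: shoot anyway
```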

(It’s also worth noting that the main difficulty with algorithmically removing motion blur caused by camera shake, usually via deconvolution, is because of the uncertainty about exactly how the camera moved while the photo was being taken.  If we were to record accurate accelerometer readings over the duration of the time that the shutter was open and then store that in the EXIF data, it suddenly becomes much easier to remove camera-shake blur as a post-process).

Combine GPS with network connectivity so my camera’s positional data can be at least as accurate as my phone (i.e. A-GPS).  Also toss in a compass.

(Incidentally, the desire to have accurate location information isn’t just about remembering where you were when you took a photo.  There’s much cooler technology coming down the pike that will allow for re-creating the actual geometry of the place where you’re standing if you’ve got enough views of the scene.  Take a look at Microsoft’s Photosynth, Autodesk’s 123D.   And thus the more people who are posting geotagged information on the world around them, the better we’ll be able to characterize that world.)

Yeah, there will be a battery-drain penalty if I have all these sensors and radios operating constantly.  Big deal.  Let’s see how much a Canon 5DmkIII battery goes for on ebay these days… Oh look – about TEN DOLLARS.  For that amount I think I can manage to have a few spares on hand.

— Lots of buttons and dials, sensibly placed.  More importantly – allow me to reprogram the buttons and dials in any way I see fit.  Any of the buttons/dials, not just a couple of special-case ones.  (And touchscreens don’t eliminate the need for this.  Too fiddly when you’re trying to concentrate on just getting the shot.)  If you’re really sexy you’ll give me tiny e-ink displays for the button labels so I can reprogram the labels too…

— USB support.  Not just as a dumb device I can plug into a computer, but as a host computer itself.  Like I talked about here, it sure would be nice if my camera had host-device USB capabilities so I could just plug an external drive into it and offload all my photos onto some redundant storage while I’m traveling.  Without needing to carry a laptop with me as well.  (Or having to purchase some overpriced custom card reader with built-in storage.)

And… incidentally… how about letting me charge the battery in the camera by plugging the camera into a USB port?  I understand that there may be power issues (someone out there want to figure out the tradeoff for charge times?) but just give me the choice to leave the bigassed battery-charging cradles at home.  (Or, more specifically, to not be F’d when I accidentally forget it at home!)

(In terms of wired connectivity there’s also Thunderbolt to consider but it smells like it’s a few more years out.)

Finally, for the super-geeks, it sure would be cool to get access to the hardware at a really deep level.  I’m talking stuff like allowing me to play with the scanning rate of the rolling shutter, for instance, if I want to make some funky super-jellocam images.

Okay, let’s dive into software-only stuff.  Here are a few things that would be almost trivial to implement if I had even basic programmatic access to my camera’s hardware.

— Allow me to name files sensibly and according to whatever scheme I want.  Right now I rename my images as soon as they’re loaded onto my computer (based on the day/time the photo was taken) but why can’t my camera just do that for me automatically?  And do it according to any scheme I care to specify.
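
The logic really is trivial.  Here’s a rough sketch of the renaming I currently do by hand, using each file’s modification time as a stand-in for the capture timestamp (an in-camera version would obviously use the real capture time, or the EXIF DateTimeOriginal tag); the naming scheme itself is just an example:

```python
import os
from datetime import datetime

SCHEME = "{time:%Y%m%d_%H%M%S}_{seq:03d}{ext}"     # e.g. 20120914_183042_001.CR2

def rename_by_date(folder):
    files = sorted(f for f in os.listdir(folder)
                   if f.lower().endswith((".jpg", ".cr2", ".nef", ".dng")))
    for seq, name in enumerate(files, start=1):
        path = os.path.join(folder, name)
        taken = datetime.fromtimestamp(os.path.getmtime(path))
        new_name = SCHEME.format(time=taken, seq=seq,
                                 ext=os.path.splitext(name)[1])
        os.rename(path, os.path.join(folder, new_name))

# rename_by_date("/path/to/a/copy/of/your/card")   # try it on a copy first
```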

— Timelapse/Long Exposure.  This is such a simple feature from a software perspective yet many (probably most) cameras still don’t support it.  For my DSLR I need to buy an external trigger with timer capabilities.  Why?  (Other than the obvious answer, which is because it allows the camera manufacturers to charge a ridiculous amount of money for such a remote trigger.  Hint – buy an ebay knockoff instead).
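
And just to underline how little software this is, here’s the entire intervalometer, again written against a hypothetical `camera` object rather than any real API:

```python
import time

def timelapse(camera, interval_s=5.0, frames=600):
    """Fire the shutter every interval_s seconds for a fixed number of frames."""
    start = time.time()
    for i in range(frames):
        camera.trigger_shutter()                      # hypothetical shutter call
        next_shot = start + (i + 1) * interval_s      # schedule against the start time
        time.sleep(max(0.0, next_shot - time.time())) # ...so timing errors don't accumulate
```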

— Motion detection.  It’s an easy enough algorithm to detect movement and only trigger a photo when that occurs.  Yes, this would make it easy to set up a camera to see if the babysitter or the house-painters wander into rooms they shouldn’t, but it would also be very interesting to use for, say, wildlife photography.
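
A minimal version of that is just frame differencing.  Here’s a sketch using OpenCV against a webcam feed; the same logic could run on a camera’s live-view stream:

```python
import cv2

def watch_for_motion(diff_threshold=25, min_changed_pixels=5000):
    cap = cv2.VideoCapture(0)                       # webcam standing in for live view
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("no video source")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(mask) > min_changed_pixels:
            cv2.imwrite("motion_capture.jpg", frame)    # "take the photo"
        prev = gray
    cap.release()
```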

— Let me play with the shutter/flash timing.  Allow me to do multiple-flashes over the duration of a long exposure, for instance, to get interesting multiple-exposure effects.

— Give me programmatic control over the autofocus and the zoom (if it’s servo-controlled), so I can shoot bracketed focus-points or animate the zoom while the shutter is open for interesting effects.  I mean Lytro [link] is cool and all, but most of the time I don’t want to radically refocus my image, I just want to tweak a slightly missed shot where autofocus grabbed the bridge of the nose instead of the eyes.  If my camera had automatically shot 3 exposures with focus bracketing I’d be able to pick the best, but also I’d be able to use some simple image manipulation to stack them and increase the DOF.
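
As a sketch of how the focus-bracketing piece might work (hypothetical `camera` calls again, plus a completely standard sharpness metric), picking the best of three frames could be as simple as:

```python
import cv2

def sharpness(image_bgr):
    """Variance of the Laplacian: a common 'how in-focus is this?' score."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def focus_bracket(camera, offsets=(-1, 0, +1)):
    """Capture one frame per focus offset and return the sharpest one."""
    base = camera.get_focus()                 # hypothetical focus-position API
    frames = []
    for off in offsets:
        camera.set_focus(base + off)
        frames.append(camera.capture())       # hypothetical: -> BGR numpy array
    return max(frames, key=sharpness)
```

(The same three frames could just as easily be aligned and stacked for extra depth of field instead of only keeping one.)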

(Brief sidetrack here.  Yes, some of the coolcool stuff will require fairly sophisticated algorithms to accomplish properly in a lot of situations.  So what?  There are, in fact, a lot of sophisticated algorithms out there.  There’s also a lot of stuff that can be done without sophisticated algorithms, just by using elbow grease and a few hours in Photoshop.  Give me the raw material to work with and I’ll figure out the rest!)

— Activate the microphone to do a voice annotation or capture some ambient sound with every photo.  Or for that matter, allow me to voice-activate the shutter.  ( “Siri, take a photo please.”)

Actually, let’s talk about that a bit more.  Voice-control for taking a photo may not be all that exciting, although I can certainly see situations where I’m trying to hold my camera in some strange position in order to get a particular point of view and the shutter-button has become difficult to reach easily.  (Example?  Camera on a tripod that I’m holding above my head to get a very high angle, with LCD viewfinder angled downward so I can see the framing.)   Where’s my voice-activated Digital Camera Assistant when I need her?

But beyond that, think about all the camera settings that you normally have to dig through a menu to find.  Custom-programmable buttons & dials are great, but there’s always going to be a limited number of them.  Being able to quickly tell the camera to adjust the ISO or turn on motor-drive (AKA burst or continuous mode) just might make the difference between getting the shot and not.

Finally, there’s a whole huge variety of general image-processing operations that could be applied in-camera, from custom sharpening algorithms to specialized color-corrections to just about anything else you currently need to do on your computer instead.  HDR image creation.  Panoramic stitching.  Tourist Removal.  Multiple-exposures to generate super-resolution images.  Etc., etc.  Are some of these better done as post-processes rather than in-camera?  Sure, probably.  But you could make the same claim about a lot of things – it’s why some people shoot RAW and others are happy with JPEG. Bottom line is that there are times where you just want to get a final photo straight out of the camera.

Having said that, let’s talk about the proper way to do some of these fancy things like creating HDR imagery.  There are already cameras that can, for example, auto-bracket and then create HDR images.  Sort of.  More specifically what they’re doing is internally creating an HDR image, then tone-mapping that wide dynamic range down to an image that looks ‘nice’ on a normal monitor, and then saving that out as a jpeg.  (‘Nice’, incidentally, being an arbitrary mapping that some dude in a cubicle came up with.  Or maybe a group of dudes in a conference room.  Whatever… it’s still their judgement on how to tone-map an image, not mine.)

Better thing to do?  Shoot an exposure-bracket of the scene and combine them (after auto-aligning as necessary) into a true high dynamic range image.  Save that as a high bit-depth TIFF file or DNG or (better) OpenEXR or something.  You can give me the jpeg too if you want, but don’t throw away all the useful data.  In other words, let me work with HDR images in the same way I work with RAW images because that’s exactly what a RAW file is… a somewhat limited high dynamic range file.
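
To make that concrete, here’s roughly what the ‘keep the data’ pipeline looks like using OpenCV’s off-the-shelf HDR tools: merge the bracket into a true floating-point radiance map, save that, and treat the tone-mapped version as an optional extra.  (The file names and exposure times below are placeholders.)

```python
import cv2
import numpy as np

def merge_bracket(paths, exposure_times_s):
    images = [cv2.imread(p) for p in paths]
    times = np.array(exposure_times_s, dtype=np.float32)

    cv2.createAlignMTB().process(images, images)            # align hand-held frames
    hdr = cv2.createMergeDebevec().process(images, times)   # 32-bit float radiance map
    cv2.imwrite("scene.hdr", hdr)                            # keep ALL the data

    # optional: also bake a viewable 8-bit preview
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("scene_preview.jpg",
                np.clip(ldr * 255, 0, 255).astype("uint8"))

# merge_bracket(["ev_minus2.jpg", "ev_0.jpg", "ev_plus2.jpg"], [1/200, 1/50, 1/12.5])
```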

This mentality is the sort of thing I’d like to see for all sorts of in-camera software algorithms – give me something useful but don’t throw away data.

I could probably list dozens of additional image-processing algorithms that it would be cool to have in my camera (not to mention video-related tools).   Some of the features mentioned above may not feel terribly important or may even seem ‘gimmicky’, but all it takes is a single special-case situation where you need one of them and you’ll be glad they’re part of your software arsenal.

So who’s going to make a camera like this?  Who’s going to make the hardware and who’s going to make the software/OS?  In terms of hardware I’m betting it won’t be Canon or Nikon – they’re too successful with their existing systems to create something as open as what I’m describing.  Probably not Sony either – they’ve always relied on proprietary, closed hardware too (look how long it took for them to realize that people were avoiding their cameras simply because they didn’t want to be tied to the Sony-only ‘memory stick’).

I guess I’m hoping that someone like Panasonic or Olympus steps up to the plate, mostly because I love the Micro 4/3 lens selection.  But maybe it will be a dark-horse 3rd party – a company that sees the opportunity here and is willing to invest in order to break into the digital camera marketplace with a huge splash.  Might even be someone like Nokia – does the recent 41 megapixel camera phone indicate a potential pivot from phone manufacturer to camera manufacturer?

In terms of software I know I’d love to see a full-featured camera running iOS, but realistically I suspect that a much more likely scenario is that we’ll get a camera running a version of Google’s Android OS.   Or hell, maybe Microsoft will step up to the plate here.  Crazier things have happened.

But bottom line is that it sure feels like someone is going to do these things, and hopefully soon.  And then the real fun will start, and we’ll see features and capabilities that nobody has thought of yet.  So you tell me – If you had full software control over your camera what would you do with it?

Announcing… FreezePaint


Yeah I know, things have been a little quiet here on the blog lately. I’ve been head-down on a couple of projects that became rather all-consuming. But the good news is that one of them has finally come to fruition and so let’s talk about it.

Because, hey, I’ve just released my first app into the iOS App store!

It’s called FreezePaint and it’s pretty fun, if I do say so myself.

The website for the app is at http://www.freezepaintapp.com and that’ll give you a rough idea of what it’s all about*.  But if I had to do a quick summary description about it, I’d say it’s a sort of live on-the-fly photo-montaging compositing remixer thingy.  Which probably makes it sound more complicated than it is.  Here, watch the video:

Of course as anybody who’s done an app can tell you, getting it launched is only the beginning of the process.  Time to put on my sales&marketing hat.

I’ve had some really great advice from some really great people (thanks, really great people!) and one of the things I heard several times was that it’s extremely important to get that initial sales bump to be as large as possible.

So anybody that’s reading this who has 99 cents in their pocket and is feeling either curious or charitable, I’d be HUGELY APPRECIATIVE if you could go and buy a copy of FreezePaint. More particularly, I’d be extra hugely appreciative if you’d go buy it, like, now. Heck, I’ll even offer a money-back guarantee. If you really don’t like it – if you don’t think you’ve gotten 99 cents’ worth of amusement out of it – then I’ll happily PayPal a buck back to you. Simple as that.

But beyond just buying it, I’m hoping you can help spread the word a little bit. Because that’s where the real traction will come from. Fortunately we’ve made it pretty darn easy to do that because once you set things up it’s just a single button-click to share via Twitter, Facebook, Flickr or email. (Or all four at the same time!)

(And if you like it, remember that good reviews are an app’s life-blood… here, let me make it easy by providing a link to where you can leave one…  Just click here and scroll down a little bit.)

I know this all sounds rather self-serving. It totally is! I want people to use FreezePaint. I want to see what sort of wacky images you can come up with!  At this point I’m not even sure what the most common usage is going to be!  Will people spend most of their time putting their friends’ faces on their dogs?  Or will it be more of a ‘scrapbooking’ thing – a quick way to collage together a fun event or location.  Beats me.  A few of the images that beta-testers have created are at http://favorites.freezepaintapp.com – send me something good/fun/interesting/disturbing that you’ve done with FreezePaint and there’s a pretty good chance it’ll end up there too!  (There’s a button at the bottom of the ‘sharing’ page in the app that shoots your images straight to my inbox.)

I’m sure I’ll be doing several more posts about all of this – about the process of how it all came together, about the pains of trying to find the right developer to work with (which I did, finally!), about the fun of dealing with the app store submission process, etc… :-)

But for now I’m just hoping that people check it out, tell their friends, and mostly have fun!

*Also, someone remind me to write a blog post at some point about what an awesome tool http://www.unbounce.com is for building product websites like this – it saved me tons of time.

Mirror Box

Bored?

Buy a box of 6 mirror tiles ( i.e. something like this )

Use some duct-tape and make a box out of them (mirror-side inwards)

(put a little duct-tape handle on the 6th tile to make it easy to take it on and off)

Set timer on camera, (turn flash on), place camera in box, put mirror ‘lid’ onto box before shutter fires…

…get cool photo.

Toss in balls of aluminum foil – because they’re shiny.

Play with light-leaks at the seams of the box.

Add candles.  Pretty.

Finally, add Droids.  Everything is better with Droids.

(Click on any of the above images to biggerize them.  Also on Flickr, including a few more.)

 

UPDATE:  Here’s someone who got inspired by the idea and did a MirrorBox photo-a-day throughout January 2012: http://flic.kr/s/aHsjxK8ZKe  Cool!

Photography Futures talk

Gave a talk last month at the Mindshare LA conference, chatting about where camera/photo technology is headed and the ramifications of that on our everyday lives.

Mindshare events typically feature a very eclectic program (the night I spoke there were also presentations on building a helicopter-motorcycle, permaculture, and how women can use pole-dancing for self-empowerment) so my talk was targeted to a diverse, nontechnical audience –  but I think I managed to pack a few interesting tidbits into the 15 minute window I had. Enjoy!

(And if anybody out there wants to fly me to random interesting locations on the planet to give the same talk, I’d be happy to do so :-))

Cloud sync of photos – what I REALLY want…

 

Just got done taking a quick look at today’s announcement of Adobe Carousel – their new cloud-based photo-syncing/sharing/editing app. Not sure I totally get the utility of it for me, personally – Dropbox gives me easy sync between all my devices already and I can’t imagine I’ll be doing a lot of image-editing on the fly. Your Mileage May Vary, of course.

But it got me to thinking about a piece of the storage/cloud issue that hasn’t really been well-addressed yet: The ability to sync some things locally but still have access to more things on an on-demand basis. In other words, I want to be able to have all my ‘favorite’ photos synched between all my devices – phones and laptops – and kept available as a local copy. (i.e. I want to be able to see them even if I have no network connection). But if I do have a network connection then everything else should be easily accessible as well.

Dropbox sort-of does this – they allow you to specify certain folders that aren’t synched to specific devices, but if you want to get to those non-synched folders you have to go into the clunky web interface. Wouldn’t it be nice if you could just see those other filenames as if they were local files and if you clicked on them then Dropbox would transparently run out and grab them as requested? Maybe even have a local cache where the most recent ones are kept around until something newer flushes them? In other words, it would seem as if I have all of my files local, but some of them would just take a little bit longer to access.

And going beyond this sort of ‘dumb’ file synching (where all files are treated the same way), what I’d really love to see is a storage/sync tool that understands that photos are different from other file-types, and that often-times it’s perfectly acceptable to have a lower-resolution (or lower quality) version of the photo available instead.  A 1 megapixel medium-quality jpeg version of an image is wayyyy smaller (like 99% smaller!) than a 16 megapixel original.

In other words, my ideal system would intelligently manage my photos so that everything is available at low resolution even if I have no network connection but if I am connected to the magical ‘cloud’ then it (transparently) will serve me the full-resolution image instead.
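
The core of what I’m asking for really is that simple.  Here’s a sketch of the resolver, with `is_online()` and `download()` standing in for whatever the actual cloud service would provide (none of this is a real API):

```python
import os

PROXY_DIR = "proxies"     # small (~1 MP) JPEGs, always kept in sync locally
CACHE_DIR = "originals"   # full-resolution files, fetched and cached on demand

def open_photo(photo_id, is_online, download):
    """Return a path to the best version of a photo that's available right now."""
    full = os.path.join(CACHE_DIR, photo_id)
    if os.path.exists(full):
        return full                               # already cached at full quality
    if is_online():
        download(photo_id, full)                  # transparently fetch the original
        return full
    return os.path.join(PROXY_DIR, photo_id)      # offline: degrade gracefully
```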

(The same thing conceivably applies to music as well of course. Let me keep a few GB of my favorite songs on my iPhone at full quality, but if I really want to listen to something obscure from the other 20GB of music I own then I can at least play a slightly degraded version of it.)

The key to all of this (and why it would ideally be something built into the operating system) is to make it as transparent as possible. Image viewing or sharing shouldn’t require me to keep track of whether I’m offline or not – they should just figure it out and do the best job they can at showing me the best quality version possible.  Simple :-)

I’m seeing a lot of ‘cloud’ services pop up – and they’re all great if you’ve got a 100% reliable network connection behind them. But people, I’ve got AT&T – a 100% reliable network is, unfortunately, not something I’m familiar with…

Long shutter helicopter lights

So I went ahead and bought one of those cool remote-controlled mini-helicopters the other day (http://amzn.to/osT7CM).  Gotta say – that’s a hellovalotta fun for 20 bucks.

And since it has these handy red/blue lights on the front of it that flash alternately, it seemed like a good idea to try some long-exposure photos whilst flying it around my living room last night.  Pretty neat (though you can definitely see how incredibly skill-less I currently am with the thing).  Still, so far the crashes haven’t been severe enough to do any damage to ‘copter, room, or my face, so I guess I’m ahead of the game.

 

Some abstract patterns with the camera pointed up at the ceiling

 

  

And yes, then I crashed it into the camera…

 

Zipping around the globe seemed like a good challenge.  At some point I’m hoping to be good enough to actually go, you know, AROUND the globe (instead of just bouncing up and down above it, barely avoiding collisions with continents or polar ice caps)

The battery was dying at this point so about all I could do was get it to skitter along the hardwood floor.  (A floor which desperately needs to be swept, apparently!)

Beyond Focus (Part 1)

Array Camera on iPhone

A lot of buzz lately on Lytro, the company who’s building a new camera based on lightfield (aka plenoptic) technology.  In a nutshell, they’re building a camera that will allow you to refocus your photos after you’ve taken them.  (Go poke around here if you want to see some examples of what is being discussed.)

And it is indeed some very cool technology (although I’m a little surprised that it’s taken this long to productize it given the fact that the basic concepts have been floating around for a long time now).  (Okay, technically they’ve been floating around for a really, really long time now).

But I think it’s worth talking about how this fits into the workflow of a typical photographer.   Because, although there’s been no shortage of people willing to label this as a ‘revolution’ in the way we do photography, I personally see this as more of a feature – one more thing that will be nice to have in your bag of tricks.

A useful trick, to be sure, but if you lined up all the features I’d like in a camera it would probably be somewhere in the middle of the pack.  Somewhere (far) below a high resolution/high dynamic range sensor.  Somewhere below good ergonomics.  Maybe (if it’s well-implemented) I’d choose this over an extremely high burst-speed or an integrated Wi-Fi connection.  Maybe.

But it’s the ‘well implemented’ part that worries me.  Because it’s not a feature that would compel me to buy a particular camera above all others.  There are always trade-offs when you’re creating a camera and the Lytro camera will be no exception.  So unless they manage to execute extraordinarily well on all the other aspects of their camera, it could easily turn into something that’s just not all that compelling for most people.  If I said there is a camera out there that can capture 100 megapixel images you might think that’s a very cool device, right?  But then if you find out that it can only shoot at ISO 50 – that it produces absolutely unusable images unless you’re shooting in bright sunlight… Well, that camera is suddenly a lot less interesting.

So, in much the same way as what we’ve seen with the Foveon sensor – a cool technology that was never paired with a camera which was all that good in other areas – we’ll just have to wait and see what the eventual product from Lytro will look like.  I’m optimistic – there’s some smart folks there – but it’s not a trivial challenge.

I think the best thing we can hope for is that this sort of feature doesn’t remain something that is only available from Lytro.   Hopefully we’ll see similar capabilities coming from the other manufacturers as well.  Yes, there are definitely patents in place on a lot of this but there are also a number of different ways to implement light-field image capture.  For instance, you can (as I believe is the case with Lytro) use a lens that captures multiple viewpoints and sends them all to a single large sensor.  Or, you can use multiple lenses each coupled with their own sensors.  Either way you get an image (or set of images) that will require some post-processing to produce something that’s human-friendly.

Which is fine.  Something I’ve been predicting for years (and have said so many times on the This Week in Photography podcast) is that the most common camera we’ll see in the future will be a computational camera.  And personally I’m willing to bet that more often than not it will be composed of an array of smaller lenses and sensors rather than a single monolithic lens/sensor.

Why?  Big sensors are expensive to manufacture, largely due to the fact that a single defect will require you to scrap the whole thing.  Big glass is also expensive – it’s very difficult to create a large lens without introducing all sorts of optical aberrations.  (Although smart post-processing will be increasingly important in removing those too).

So it’s clear to me that there’s a better and more cost-effective solution waiting to be developed that uses an overlapping array of inexpensive cameras.  (Lytro started out this way, incidentally, via the Stanford Multi-Camera Array)

The camera in your phone, or the one stuck in the bezel of your laptop, costs only a few bucks when purchased in quantity.   Now make yourself a 5×5 array of those things.  If each one is 8 megapixels (easily available these days), that’s a lot of resolution to play with.  No, you won’t be able to generate a single 200Megapixel image but you’ll still be able to get some serious high-rez imagery that also has a lot of additional benefits (see my next post for more on these benefits).

And yes, I’m completely ignoring a ton of engineering challenges here, but there’s nothing I’m aware of that feels like it can’t be solved within the next few years.

Bottom line:  Don’t be at all surprised if the back of a future iPhone looks something like the image at the top of this post.  I’m not gonna guarantee it, but I’m willing to take bets…

(Click here for Part 2 of this discussion)

Beyond Focus (Part 2)

So let’s talk some more about the ramifications of the lightfield/plenoptic technology that I looked at in my last post.  For Lytro, the marketing push (and the accompanying press) are all about the ability to re-focus the image after the photo’s been taken, as a post-process.   And they show you these massive foreground-to-background refocuses.  But that’s really just a parlor trick – how often are you suddenly going to decide that the photo you shot would be much better off if the background were in focus instead of the person standing in the foreground?

On the other hand, being able to do a very subtle refocus – for example to make sure that the eyes in a close-up portrait are perfectly sharp – that has real value to almost all photographers and there’s many a shot I’ve taken where I wish I could do exactly that!

But there’s actually a lot more to this technology than just refocusing. In reality what you’ve got here (or more specifically what can be computed here) is an additional piece of data for every pixel in an image – information about the depth or distance-from-camera.

So it’s not just the ability to refocus on a certain object in the image, it’s an overall control over the focus of every depth plane.  The narrow-focus ‘Tilt-Shift’ effect becomes easy.  You can even have multiple planes of focus.  And macro photography is almost certainly going to see a big benefit as well.

While we’re at it, you’ll also have the ability to choose the exact characteristics of the out-of-focus areas – the bokeh.  This would include the ability to create ‘stunt bokeh’ similar to what certain Lensbaby products can produce (see here).

Oh, and it’s also pretty easy to generate a stereo image pair, if you’re into that sort of thing…

But wait, there’s more!  Making use of depth information is something we do all the time in the visual effects world.  Consider the image below.

Here’s the depth image for that scene, where brighter areas are (obviously) closer to camera.

In the same way that we can use this information to choose where to focus, we can apply other image adjustments to different depth areas.  Want to introduce some atmosphere?  Just color-correct the ‘distant’ pixels to be lower contrast.
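
That sort of depth-driven grade is only a few lines of math once you have the depth channel.  Here’s a sketch (NumPy, using the same white-means-near convention as the depth image above):

```python
import numpy as np

def add_atmosphere(image, depth, haze_color=(0.7, 0.75, 0.8), strength=0.6):
    """image: float RGB array in [0,1]; depth: float array in [0,1], 1.0 = nearest."""
    haze = np.array(haze_color, dtype=np.float32)
    distance = 1.0 - depth                          # 1.0 = farthest from camera
    mix = (strength * distance)[..., np.newaxis]    # more haze for distant pixels
    return image * (1.0 - mix) + haze * mix         # blend distant pixels toward the haze
```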

From there it’s just a step away to compositing new elements into the scene – anything from a snowstorm to ravenous rampaging raptor robots.

Bottom line?  Computational photography in its many forms is the future of photography.  We’re almost certainly going to see the single-lens, single-sensor paradigm go away and we’re absolutely going to live in a world where more and more of the image-creation process occurs long after you’ve actually ‘taken’ the photograph.   Personally, I’m looking forward to it!

X vs. Pro.

I’ve had a couple of people ask for my thoughts on the new FCPX release given my history with Apple and in particular my experience with how they dealt with another product that was focused (in our case almost exclusively) on professionals – the compositing software ‘Shake’.  So, even though I don’t think they’re totally analogous events, I figured I’d use it as an opportunity to make a couple of points about (my perception of) how Apple thinks.

For those that aren’t familiar with the history, Shake was very entrenched in the top end of the visual effects industry.  The overwhelming majority of our customers were doing big-budget feature film work and were, naturally, all about high-end functionality.

So after Apple acquired us there was a lot of concern that Cupertino wouldn’t be willing to continue to cater to that market and, although it took a few years, that concern did indeed come to pass.   The development team was gradually transitioned to working on other tools and Shake as a product was eventually end-of-life’d.

And back then the same questions were being asked as now – “Doesn’t Apple care about the high-end professional market?”

In a word, no.  Not really.  Not enough to focus on it as a primary business.

Let’s talk economics first.  There’s what, maybe 10,000 ‘high-end’ editors in the world?  That’s probably being generous.  But the number of people who would buy a powerful editing package that’s more cost-effective and easier to learn/use than anything else that’s out there?   More.  Lots more.  So, a $1000 high-end product vs. a $300 product for a market that’s at least an order of magnitude larger.   Clearly makes sense, even though I’d claim that the dollars involved are really just a drop in the bucket either way for Apple.

So what else?  I mean what’s the real value of a package that’s sold only to high-end guys?  Prestige?  Does Apple really need more of that?  Again, look back at Shake.  It was dominant in the visual effects world.  You’d be hard-pressed to pick a major motion picture from the early years of this century that didn’t make use of Shake in some fashion.  And believe me, Lord of the Rings looks a lot cooler on a corporate demo reel than does Cold Mountain or The Social Network.  Swords and Orcs and ShitBlowingUp, oh my.  But really, so what?

Apple isn’t about a few people in Hollywood having done something cool on a Mac (and then maybe allowing Apple to talk about it).  No, Apple is about thousands and thousands of people having done something cool on their own Mac and then wanting to tell everyone about it themselves.  It’s become a buzzword but I’ll use it anyway – viral marketing.

And really, from a company perspective high-end customers are a pain in the ass.  Before Apple bought Shake, customer feedback drove about 90% of the features we’d put into the product.  But that’s not how Apple rolls – for them, high-end customers are high-bandwidth in terms of the attention they require relative to the revenue they return.  After the acquisition I remember sitting in a roomful of Hollywood VFX pros where Steve told everybody point-blank that we/Apple were going to focus on giving them powerful tools that were far more cost-effective than what they were accustomed to… but that the relationship between them and Apple wasn’t going to be something where they’d be driving product direction anymore.  Didn’t go over particularly well, incidentally, but I don’t think that concerned Steve overmuch… :-)

And the features that high end customers need are often very very unsexy.  They don’t look particularly good in a demo.  See, here’s the thing with how features happen at Apple to a great extent – product development is often driven by how well things can be demoed.  Maybe not explicitly – nobody ever told me to only design features that demoed well – but the nature of the organization effectively makes it work out that way.  Because a lot of decisions about product direction make their way very far up the management hierarchy (often to Steve himself).  And so the first question that comes up is ‘how are we going to show this feature within the company?’  All the mid-level managers know that they’re going to have a limited window of time to convey what makes a product or a feature special to their bosses.  So they either 1) make a sexy demo or 2) spend a lot of time trying to explain why some customer feels that some obscure feature is worth implementing.  Guess which strategy works best?

And by this I don’t mean to imply at all that the products are style over substance, because they’re definitely not.   But it’s very true that Apple would rather have products which do things that other products can’t do (or can’t do well), even if it means they leave out some more basic&boring features along the way.  Apple isn’t big on the quotidian.  In the case of FCP, they’d rather introduce a new and easier and arguably better method for dealing with cuts, or with scrubbing, or whatever, even if it means that they need to leave out something standard for high-end editors like proper support for OMF.  Or, to look all the way back to the iPod, they’d rather have a robust framework for buying and organizing music instead of supporting, say, an FM radio.  And it’s why Pages doesn’t have nearly the depth of Word but is soooo much more pleasant to use on a daily basis.

So if you’re really a professional you shouldn’t want to be reliant on software from a company like Apple.  Because your heart will be broken.  Because they’re not reliant on you.  Use Apple’s tools to take you as far as they can – they’re an incredible bargain in terms of price-performance.  But once you’re ready to move up to the next level, find yourself a software provider whose life-blood flows only as long as they keep their professional customers happy.  It only makes sense.

 

ADDENDUM.  I suppose I should make it clear (since some people are misinterpreting a few things) that I’m not complaining about Apple’s decisions with regards to either Shake or FCPX.  (As a stockholder I’ve got very little to complain about with regards to Apple’s decisions over the past several years :-))

And, in spite of the fact that MacRumors characterized this post as saying “Apple Doesn’t Care about Pro Market” I don’t believe at all that ‘professionals’ should immediately flee the platform.  As with anything, you should look at the feature set, look at the likely evolution, and make your own decisions.  My perception of the high-end professional category is informed by my history in feature-film production, which is a large, cooperative team environment with a whole lot of moving pieces.  Yours may be different.

Ultimately my goal was to shed some light on the thought-processes that go into Apple’s decisions, and the type of market they want to address.   Bottom line is that I do think that FCPX will provide incredible value for a huge number of people and will undoubtedly continue to grow in terms of the features that are added (or re-added) to it.   Just don’t expect everything that was in FCP7 to return to FCPX because they’re really different products addressing different markets.  It’s up to you to decide which market you work in.

Your camera, it lies!

If you’re at all tuned in to the world of digital photography you’re probably already aware of RAW files and why you should use them. But just in case you’re not, the quick answer is that RAW files preserve as much data as possible from what the sensor captured. This is in contrast to what happens when you shoot JPEG, where your camera makes some fairly arbitrary decisions about what it thinks the photo should look like, often throwing away a lot of detail in the process. Specifically, it will throw away detail at the high end of the image – the brightest parts.

But what a lot of people don’t realize is that even if you are shooting RAW, the image review you do on the display on the back of the camera isn’t showing you any of that highlight detail either. In other words, even when you’re shooting RAW, your camera will only show you what you’d get if you were shooting JPEG.

Why is this an issue? Because it means that you may be tempted to underexpose the image in order to prevent, for example, your sky from ‘blowing out’. Even though in reality your RAW photo may have already captured the full tonal range of the sky.

Here’s a quick example – first of all, a photo that I took directly off the display on the back of my camera. (Yes, this is somewhat meta :-).  As you can see, the sky appears to be completely overexposed to white.

Here, I even made an animated GIF that shows you what the flashing overexposure warning on my camera looks like  (you may have to click on the image to see the animation).

Yup, the camera is telling me that, without a doubt, I’ve really overexposed this shot and have lost all that detail in the brightest part of the sky.  And if I had indeed been shooting JPEG this would be true.  I’d get home and view the file on my computer and I’d get an image that looks pretty much the same as what the display showed me, i.e. this:

But I wasn’t shooting JPEG, I was shooting RAW.  And if I take the RAW file that I actually shot and bring it into something like Aperture or Lightroom and work a little magic, you can see that there’s a whole bunch of beautiful blue sky in the allegedly overexposed area. Like this:

Tricky tricky.  Note that even the histogram display is misleading you – it, too, is showing you data relative to the JPEG file, not the underlying RAW file.

See how the right side of the histogram slams up against the wall?  This also indicates that you’ve clipped data.  Only you haven’t.

So beware, gentle readers.  Beware of the camera that lies.  Unfortunately there’s no good solution to this other than to develop some instincts about how much you should (or shouldn’t) trust your camera. It sure would be nice if manufacturers offered the option to display the image (and histogram) relative to the complete data captured in the RAW file but I’ve yet to find a camera that allows for this. Anybody seen one?
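The closest workaround I know of is to check the actual sensor data once you’re back at a computer. Here’s a rough sketch of that check in Python using the third-party rawpy library – note that the margin value and the white/black-level handling are my own assumptions and will vary somewhat from camera to camera, so treat this as a starting point rather than gospel:

import sys

import numpy as np
import rawpy  # third-party: pip install rawpy


def clipped_fraction(path, margin=0.99):
    """Fraction of photosites at (or very near) true sensor saturation."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        white = float(raw.white_level)                        # sensor saturation value
        black = float(np.mean(raw.black_level_per_channel))   # average black level
        return float(np.mean((data - black) >= margin * (white - black)))


if __name__ == "__main__":
    frac = clipped_fraction(sys.argv[1])
    print(f"{frac:.2%} of photosites are genuinely clipped")
    # If this comes back near zero despite a blinking overexposure warning,
    # the camera's display was bluffing - the highlight detail is recoverable.

Of course that only tells you the truth after the fact, which is exactly the problem.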

In the meantime, just be aware of the situation and plan your exposures accordingly!

Camera companies are (still) stupid

Seriously, why the hell, in this rather modern world we live in, aren’t cameras able to just copy files directly to an external hard drive without needing a computer in the middle?  It’s not that hard, people…

</rant>

So with that in mind, here’s a challenge for the wonderful CHDK hacker community – can you develop this capability?  Bonus points for repurposing the essentially worthless ‘Direct Print’ button into a quick backup-to-external-disk control.  That would be sweet!
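Just to be concrete about how little logic is actually involved, here’s a sketch of the copy-and-verify loop in Python. The mount points are hypothetical, and a real CHDK version would obviously live in the camera’s own scripting environment rather than on a computer – this is just the shape of the thing:

import hashlib
import shutil
from pathlib import Path

CARD = Path("/mnt/sdcard/DCIM")        # hypothetical mount point for the memory card
BACKUP = Path("/mnt/external/backup")  # hypothetical mount point for the external drive


def sha1(path, chunk=1 << 20):
    """Checksum a file in 1MB chunks so we can verify the copy."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()


def backup_card():
    for src in sorted(CARD.rglob("*")):
        if not src.is_file():
            continue
        dst = BACKUP / src.relative_to(CARD)
        if dst.exists() and dst.stat().st_size == src.stat().st_size:
            continue                        # already copied on a previous run
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)              # copy, preserving timestamps
        assert sha1(src) == sha1(dst), f"verification failed for {src.name}"


if __name__ == "__main__":
    backup_card()

That’s the whole feature: walk the card, copy anything new, verify it landed intact.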

 

Shades of Gray

There are two types of people in the world – those that divide things into two categories and those that don’t. I’m one of those that don’t :-)

Okay, look… I get it. Our brains are pattern-finding devices and nothing’s easier than putting things into either/or categories. But not everything is easily categorizable as Right or Wrong, Good or Evil, Black or White… in fact virtually nothing is. It’s a mixed-up muddled-up shook-up world, right? So when I get into discussions with people who see nothing but absolutes, I tend to push back a bit. At least in my opinion, that sort of dualistic worldview isn’t just lazy thinking; it can quickly become dangerous if it prevents people from considering all aspects of a situation.

The particular conversation that sparked this blog post somehow turned to the Taijitu – the classic Yin/Yang symbol – which for a lot of people has apparently come to embody this black-and-white worldview. Disregarding the fact that the Taijitu is a lot more nuanced than that, and is more about balance than absolutes, I decided to see if I could come up with something that more explicitly acknowledges the shades of gray that exist in the real world. A bit of image-processing mojo later, and I had this:

From a distance it maintains the general appearance and symmetry of the classic yin/yang symbol, but up close we see the real story – the edges are ill-defined and chaotic, nowhere is the symmetry perfect, and most of it is composed of shades of gray.
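My actual process was a bunch of ad-hoc fiddling, but if you want to generate something in the same spirit yourself, here’s a rough sketch using numpy and Pillow (both assumed installed): build the classic taijitu procedurally, roughen the boundary with smoothed noise, and let the fills drift into grays. All the sizes and noise amounts are arbitrary choices, not a recipe for the exact image above.

import numpy as np
from PIL import Image, ImageFilter

SIZE = 1024
R = SIZE * 0.45                              # radius of the big circle


def smooth_noise(shape, blur, scale):
    """Low-frequency noise: blur white noise, then rescale to roughly +/- scale."""
    img = Image.fromarray((np.random.rand(*shape) * 255).astype(np.uint8))
    img = img.filter(ImageFilter.GaussianBlur(blur))
    arr = np.asarray(img, dtype=np.float64)
    return (arr - arr.mean()) / (arr.std() + 1e-6) * scale


yy, xx = np.mgrid[0:SIZE, 0:SIZE].astype(np.float64)
# Perturb the coordinates so every boundary ends up ragged and chaotic.
x = xx - SIZE / 2 + smooth_noise((SIZE, SIZE), blur=8, scale=R * 0.03)
y = yy - SIZE / 2 + smooth_noise((SIZE, SIZE), blur=8, scale=R * 0.03)

# Classic construction: two half-radius circles stacked on the vertical axis.
in_big = x**2 + y**2 <= R**2
in_top = x**2 + (y - R / 2) ** 2 <= (R / 2) ** 2
in_bot = x**2 + (y + R / 2) ** 2 <= (R / 2) ** 2
dot_top = x**2 + (y - R / 2) ** 2 <= (R / 8) ** 2
dot_bot = x**2 + (y + R / 2) ** 2 <= (R / 8) ** 2

black = np.where(in_top, True, np.where(in_bot, False, x <= 0))
black = np.where(dot_top, False, np.where(dot_bot, True, black))

# Map to gray values and add grain so neither side is purely black or white.
value = np.where(black, 64.0, 192.0) + smooth_noise((SIZE, SIZE), 4, 40)
value = np.where(in_big, value, 255.0)        # plain white outside the circle
Image.fromarray(np.clip(value, 0, 255).astype(np.uint8)).save("gray_taijitu.png")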

I’m not sure what exactly to do with this now, but just in case anybody thinks it’s cool I’ve stuck a high-resolution version of it up on Flickr under a Creative Commons license. Do with it what you will, and if someone wants to put it on a T-shirt or something, could you send me one? :-)

Get out of Bed!

Angel Falls, Venezuela

Most of my photography is done while traveling and thus I do a lot of landscape work.  And one of the hardest things about landscape photography (other than, sometimes, the actual process of getting to the destination itself) is the fact that you have so little control over the subject.  Things like the weather can have a huge impact, obviously – blue skies can’t be arranged ahead of time.  But something that you can at least plan for, if not control, is the lighting.  Yes, every single day you’ve got that big light-source in the sky behaving in a very predictable fashion.  So use that knowledge!

An excellent case in point comes from my recent trip to Venezuela and a visit to Angel Falls.  I’ve already talked about this on the This Week in Photography podcast but because photo discussions often benefit from actual, um, photos, I wanted to go ahead and do a quick blog post about it.

Getting to the falls isn’t trivial.  Start with a puddle-jumper plane from Caracas to a small town in the middle of nowhere that has no roads leading to it – everything comes in via plane.  Then it’s several hours upriver in a very uncomfortable wooden canoe equipped with an outboard motor.  And then about an hour’s trek through the jungle.  But eventually we made it to a nice overlook of the falls, arriving sometime in the late afternoon.

And the sight was indeed spectacular and massive and awe-inspiring – the tallest waterfall in the world!   Took several photos of course and overall they were… fine.  Not particularly special other than the subject itself, but if that had been the best I’d gotten I would have been perfectly happy with it all.  Here’s one such shot from that afternoon:

angel falls, venezuela

As you can see, the sky was mostly overcast and thus the lighting was pretty flat, but that’s just the way it works sometimes.  And so we hiked back to where we would spend the night – a tin-roofed, open-air structure with a bunch of hammocks that we could curl into as the evening’s thunderstorm rolled in.

But ah, it was the next morning when the magic happened.  I woke just around dawn – feeling a bit chilly and not particularly inclined to get out from the cozy blanket that I was wrapped in.  Still, I could see that the sky was reasonably clear and thus that morning sun might actually be doing something useful for me.

Everybody else in our group was still asleep so I moved as quietly as possible when pulling out my camera and pulling on my shoes and heading down the path to a decent lookout point I’d scouted out the night before.  And as I pushed through the foliage and got to the edge of the river where the view of the falls was unblocked, I could see that getting out of bed (or out of hammock, as the case may be) had been a very good decision indeed.  The morning light was hitting the side of the tepui at the perfect angle, the waterfall was highlighted almost as if it had a spotlight on it, and the clouds were interesting and well-placed.  I fired off a few different shots, playing with the framing, as more clouds started to move in.  Of all those shots, the one shown at the top of this post is the one that really nailed it for me. (click here if you want to get to a full-sized version)

So there you have it – the difference between mid-afternoon light and what you can get during the magic hours at dusk and dawn.   Timing, as they say, is everything.

As it turned out my morning timing was about as good as it could get.  Looking at the timestamp on the photos, the shot above was taken at 6:15:04 am.  This next shot was taken at 6:20:17.

angel falls in fog

If you look closely you can see that, somewhere behind the cloud that moved in between me and the waterfall, there’s still a nice spot of light on the cliff-face – but from this location that’s not doing me much good.  Quite a difference in a matter of only 5 minutes and 13 seconds!

Some Recent Photos…

Finally managed to wade through various photos taken in the last couple of months – Mount Rainier, some Pacific Northwest coastline, and then a bit of hiking in Utah (Zion Nat’l Park and then Bryce Canyon). Flickr sets are at:

Mount Rainier: http://www.flickr.com/photos/ronbrinkmann/sets/72157622832110063/

Washington’s Olympic Peninsula: http://www.flickr.com/photos/ronbrinkmann/sets/72157622852074414/

Seattle to LA: http://www.flickr.com/photos/ronbrinkmann/sets/72157623000444004/

Zion and Bryce: http://www.flickr.com/photos/ronbrinkmann/sets/72157622692216303/

Here are some samples…

Invisible Pencils

Okay, take a look at the photo above. Notice anything strange about it? Probably not, but in fact there’s something quite different about it, or at least about the camera setup used to take the photo. Here, let me show you:

Yes, that’s right, the camera has a pencil rubber-banded right across the middle of the lens. Look back at the original photo… no pencil! So what gives?

This subject came up on the This Week in Photography podcast a few weeks ago where someone asked why the scratches on the front of his lens weren’t showing up in his photos. And the reason has to do with the fact that shooting with a large aperture will do an amazing job of hiding dirt or scratches on the front of a lens (or on a filter in front of the lens).

Without getting too deep into the optics behind this phenomenon, it primarily has to do with the fact that a wide-open aperture means that you’re gathering light from the entire surface of the lens and focusing it onto each pixel on your sensor. When you ‘stop down’, i.e. set the lens to a smaller aperture, every pixel in your photo is based on light from a much smaller area of the lens. The net result is that with a lens set to a large aperture, about all you’ll notice when you’ve got something blocking a portion of the lens is a decrease in the brightness of the image. Here’s the original image side-by-side with a shot taken without the pencil, using exactly the same f-stop, shutter speed, ISO and lighting conditions. The image on the right is brighter, due to the fact that light isn’t being blocked by a pencil…
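If you want to put a rough number on that brightness drop, you can model the pencil as a strip laid across the circular entrance pupil and work out how much of the pupil’s area it covers. The lens and pencil dimensions below are hypothetical, and treating the obstruction as if it sat right at the aperture is only an approximation (which is also why the pencil turns into a hard silhouette once you stop down far enough) – but it gives a feel for the fraction of a stop you lose when shooting wide open:

import math


def stops_lost(focal_length_mm, f_number, pencil_width_mm):
    """Fraction of a circular pupil covered by a centered strip, and the exposure loss in stops."""
    R = (focal_length_mm / f_number) / 2            # entrance pupil radius
    w = min(pencil_width_mm, 2 * R)                 # the strip can't be wider than the pupil
    # Area of a strip of width w laid across the middle of a circle of radius R.
    strip = 2 * ((w / 2) * math.sqrt(R**2 - (w / 2) ** 2) + R**2 * math.asin(w / (2 * R)))
    blocked = strip / (math.pi * R**2)
    if blocked >= 1.0:
        return 1.0, float("inf")                    # the simple model breaks down: pupil fully covered
    return blocked, math.log2(1 / (1 - blocked))


# Hypothetical numbers: a 50mm lens and a ~7mm-wide pencil.
for N in (1.4, 2.0, 2.8):
    frac, stops = stops_lost(50, N, 7)
    print(f"f/{N}: pencil covers {frac:.0%} of the pupil -> roughly {stops:.2f} stops dimmer")

At f/1.4 the math says you lose well under a stop, which lines up with the side-by-side images above being only modestly different in brightness.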

So how is this information useful to the average photographer? Well, I can think of at least two times I’ve used it to my benefit. The first was when I wanted to get a photo of something that was behind a chain-link fence. Shooting at some small aperture like f/16 would put an ugly bit of the fence across part of my image, but opening up to a wide aperture would make the fence disappear.

Another more common example is when shooting in conditions where water drops can end up on the lens – either if it’s raining or if you’re standing next to a waterfall, perhaps. Shooting at a wider aperture gives the photographer the ability to ignore a little bit of spray without fear of ruined photos.

Here’s a little animated gif where I shot a sequence of frames, stopping the aperture down a bit further each time. The range runs from f/1.4 down to f/22.

Now, to be fair, there are subtle degradations that can happen to the image, including a loss of sharpness and various diffraction artifacts. But it’s a nice trick to have in your repertoire of techniques.

Anybody else have good examples of using a wide aperture to make something invisible?

UPDATE:  A couple of good notes from the comments.  My buddy Joseph Linaschke notes that this trick is also useful if you’ve got a bit of dirt on the actual image sensor of your camera, and Hugh makes the great point that this is effectively changing the shape of the aperture, which means you can use it to modify the characteristics of the bokeh in the out-of-focus areas.  Thanks guys!