
I’ve been saying for a long time now that cameras need to evolve to where they’re an open computing platform. To where all of the hardware on the device can be programmatically controlled by onboard software applications. Unfortunately we haven’t seen a whole lot of movement in this area from the big camera manufacturers, other than a bit of SDK support by Canon and (finally) Nikon and some interesting but cumbersome hackish options.
I know that part of the reason for this is that the software/firmware on a camera isn’t really designed with this in mind – it’s not easy to retrofit an architecture that would support third-party applications. Which is why I’m starting to think that this will end up being solved from the other direction – by dedicated computing platforms that also happen to have camera capabilities. Platforms like, for instance, the iPhone.
Now clearly the title of this post is intended to be a bit ridiculous. I’ll be the first in line to talk about how crap the current iPhone camera is. But the limitations are primarily due to the hardware. And camera hardware, at least up to a certain point, is a really cheap commodity.
So let’s talk a little bit about what I could do if I had a device with decent camera hardware (a reasonably good sensor, onboard flash, maybe some other worthwhile stuff like GPS, WiFi, etc.) along with full access to that hardware via a real programming interface like the one available for the iPhone. Here are just a few ideas:
-Timelapse. This is such a simple feature from a software perspective, yet many (probably most) cameras don’t support it – see the sketch after this list for just how simple. For my DSLR I need to buy an external trigger with timer capabilities. Why? (Other than the obvious answer, which is that it allows the camera manufacturers to charge a ridiculous amount of money for such a remote trigger. Hint – buy an eBay knockoff instead.)
-Motion detection. It’s a simple enough algorithm to detect movement and only trigger a photo when that occurs (also sketched after this list). Yes, this would make it easy to set up a camera to see if the babysitter or the house-painters wander into rooms they shouldn’t, but it would also be very interesting to use with, for example, wildlife photography.
-Allow me to name files sensibly and according to whatever scheme I want. Right now I rename my images as soon as they’re loaded onto my computer (based on the day/time the photo was taken), but why can’t my camera just do that for me automatically? And do it according to any scheme I care to specify – see the sketch after this list.
-Let me play with the shutter/flash timing. Allow me to fire multiple flashes over the duration of a long exposure, for instance, to get interesting multiple-exposure effects.
-Give me programmatic control over the autofocus and the zoom (if it’s servo-controlled), so I can shoot bracketed focus points or animate the zoom while the shutter is open for interesting effects. (Both this and the previous item are sketched after the list.)
-Overlay a reticle on the image, so I can see the framing on different aspect ratios before I shoot the photo.
-Add a nice touchscreen display to the camera and then I’ll be able to easily choose my focus point(s) or select a region of the image that I want to meter on.
-Activate the microphone to do a voice annotation or capture some ambient sound with every photo. Or for that matter, allow me to voice-activate the shutter. (There’s a whole world full of sound-activation possibilities I suspect…)
-Allow me to reprogram the buttons and dials on the camera in any way I see fit. Any of the buttons/dials, not just a couple of special-case ones…
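To make the timelapse point concrete, here’s roughly all the code the feature requires. This is a minimal sketch in Python rather than actual camera firmware, and capture() is a hypothetical stand-in for whatever call the camera actually exposes:

```python
# Minimal timelapse sketch. capture() is a hypothetical camera-API call.
import time

def timelapse(capture, interval_seconds=10.0, shots=360):
    for _ in range(shots):           # e.g. 360 shots, 10 s apart = 1 hour covered
        capture()
        time.sleep(interval_seconds)
```

That’s the whole feature. Everything else – exposure, focus – is whatever the camera was already doing.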
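Motion detection is nearly as small. Here’s a rough frame-differencing sketch, again with grab_preview() and capture() as hypothetical stand-ins for the camera’s API:

```python
# Motion-detection sketch: compare each preview frame against a slowly
# adapting reference, and take a photo when the difference is large.
import numpy as np

def watch_for_motion(grab_preview, capture, threshold=12.0, alpha=0.05):
    reference = grab_preview().astype(float)
    while True:
        frame = grab_preview().astype(float)
        if np.abs(frame - reference).mean() > threshold:
            capture()                                   # something moved
        # Blend the new frame into the reference so gradual lighting
        # changes don't cause false triggers.
        reference = (1 - alpha) * reference + alpha * frame
```

A real version would want debouncing and region masking, but the core idea fits in a dozen lines.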
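File naming is barely code at all. A strftime-style pattern that the photographer can edit would cover most schemes people care about:

```python
# Filename-scheme sketch: derive the name from the capture time using a
# user-editable pattern.
from datetime import datetime

def filename_for(capture_time: datetime,
                 scheme: str = "%Y%m%d_%H%M%S", ext: str = "jpg") -> str:
    return capture_time.strftime(scheme) + "." + ext

# filename_for(datetime(2009, 3, 14, 15, 9, 26)) -> "20090314_150926.jpg"
```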
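The shutter/flash and focus ideas need firmware hooks that don’t exist today, so every call in the sketch below (open_shutter, fire_flash, set_focus, and so on) is pure invention – but notice how trivial the control logic sitting on top of those hooks would be:

```python
# Sketches of shutter/flash timing and focus bracketing. All of the
# hardware-control functions passed in are hypothetical.
import time

def multi_flash_exposure(open_shutter, close_shutter, fire_flash,
                         exposure_seconds=4.0, flashes=5):
    """Fire the flash several times during one long exposure for a
    stroboscopic multiple-exposure effect."""
    open_shutter()
    for _ in range(flashes):
        fire_flash()
        time.sleep(exposure_seconds / flashes)
    close_shutter()

def focus_bracket(set_focus, capture, distances=(1.0, 2.0, 5.0, 10.0)):
    """Shoot one frame at each focus distance (in metres)."""
    for d in distances:
        set_focus(d)
        capture()
```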
Some of this hardware control could go down to a very deep level. Let me play with the scanning rate of the rolling shutter, for instance, to give me super jello-cam images.
And then there’s a whole huge variety of general image-processing operations that could be applied in the camera, from custom sharpening algorithms to specialized color-corrections to just about anything else you currently need to do on your computer instead. HDR image creation. Tourist Removal (sketched below). Etc. Are some of these better done as post-processes rather than in-camera? Sure, probably. But you could make the same claim about a lot of things – it’s why some people shoot RAW and others are happy with JPEG. The bottom line is that there are times when you just want to get a final photo straight out of the camera.
(Of course, the really cool thing would be to have the camera create the processed JPEG while also storing the original RAW file, and then output a metadata file that, when read into Aperture or Lightroom or whatever your RAW processing package of choice is, would duplicate the image processing operations you’d applied in-camera but then allow you to add/remove/modify as you see fit. Something like the second sketch below.)
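As a taste of how simple some of this in-camera processing could be: tourist removal is, at its core, just a per-pixel median across a stack of aligned frames – the static background survives, and anything that wandered through the scene gets voted out. A minimal sketch, assuming the camera has already captured and aligned the frames:

```python
# Tourist-removal sketch: per-pixel median over a burst of aligned
# frames from a stationary (ideally tripod-mounted) camera.
import numpy as np

def remove_tourists(frames):
    """frames: list of identically-sized image arrays."""
    return np.median(np.stack(frames), axis=0).astype(frames[0].dtype)
```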
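The metadata file might look something like this. The operation names and schema here are invented purely for illustration – the point is just a machine-readable edit list written alongside the RAW file, which a desktop RAW processor could replay and then let you modify:

```python
# Hypothetical sidecar file recording the in-camera edits applied to a RAW.
import json

sidecar = {
    "source_raw": "IMG_0042.CR2",
    "operations": [
        {"op": "white_balance", "temp_k": 5200},
        {"op": "sharpen", "amount": 0.6, "radius": 1.2},
        {"op": "curve", "points": [[0, 0], [110, 130], [255, 255]]},
    ],
}

with open("IMG_0042.edits.json", "w") as f:
    json.dump(sidecar, f, indent=2)
```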
And if your camera includes additional useful hardware like GPS or network access or accelerometers, then you’ll have even more functionality you can access. Take a look at the iPhone app Night Camera, which uses the accelerometer to measure when the camera reaches a relatively stable position before taking a photo. A really smart idea, and one that will almost certainly show up in other ‘real’ cameras relatively soon. (Incidentally, it’s worth noting that the main difficulty with algorithmically removing motion blur caused by camera shake is the uncertainty about exactly how the camera moved while the photo was being taken – if we were to record accurate accelerometer readings while the shutter was open, it becomes much easier to remove the blur as a post-process. A sketch of this follows below.)
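To sketch that last point: if the camera logged accelerometer samples while the shutter was open, you could integrate them into the camera’s motion path, rasterize that path into a blur kernel, and deconvolve the image with it. The code below is a simplified illustration (grayscale only, numpy only, ignoring rotation and scene depth – real deblurring is messier):

```python
# Deblurring sketch: build a point-spread function from an accelerometer
# trace recorded during the exposure, then apply a Wiener filter.
import numpy as np

def blur_kernel_from_accel(accel_xy, dt, pixels_per_metre, size=31):
    """Integrate acceleration twice to get the camera's path during the
    exposure, then rasterize that path into a point-spread function."""
    vel = np.cumsum(accel_xy, axis=0) * dt           # acceleration -> velocity
    pos = np.cumsum(vel, axis=0) * dt                # velocity -> displacement
    px = np.round(pos * pixels_per_metre).astype(int) + size // 2
    psf = np.zeros((size, size))
    for x, y in px:
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0                         # equal dwell time per sample
    return psf / psf.sum()

def wiener_deblur(image, psf, k=0.01):
    """Frequency-domain Wiener filter: divide out the blur, with k
    damping the noise amplification."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```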
Now, before I get a bunch of replies stating “my camera can already do that” about some of the items listed above, let me reiterate my original point – a camera with the ability to run arbitrary camera-control software would be able to do all of the above items. And a lot more. The real fun will start when lots of people are brainstorming about this and then churning out features that nobody has thought of yet. Who knows what this might include?
Some of these features may not feel terribly important or may even seem ‘gimmicky’, but all it takes is a single special-case situation where you need one of them and you’ll be glad they’re part of your software arsenal.
Back to the iPhone, it’s worth mentioning that we’re already seeing some of these sorts of applications show up. There are panoramic stitchers, the aforementioned accelerometer function, multiple-exposure resolution enhancers, and even a very nice timelapse application. We may also end up seeing a sort of hybrid solution, where a portable device like the iPhone is tethered to the camera to give the desired capabilities.
I’m convinced that this is where the future of the camera is going, at least from a software perspective*, and the features listed above are just a small subset of what we’ll end up seeing. But you tell me – if you had full software control over your camera, what would you do with it?
*The future of the camera from a hardware perspective is a whole other ball of wax but I’ll save my thoughts on that for a later post. For now I’ll just say that I’m betting that the current paradigm of single (expensive) lens + single (expensive) sensor will eventually be replaced by multiple lenses and sensors and some sophisticated stitching software. Some sort of Redundant Array of Inexpensive Lenses and Sensors (I call it RAILS). I’ll try to get a post together on that topic as soon as I get a chance.