I’ve been saying for a long time now that cameras need to evolve into an open computing platform – one where all of the hardware on the device can be programmatically controlled by onboard software applications. Unfortunately we haven’t seen a whole lot of movement in this area from the big camera manufacturers, other than a bit of SDK support from Canon and (finally) Nikon, and some interesting but cumbersome hackish options.
I know that part of the reason for this is that the software/firmware on a camera isn’t really designed with this in mind – it’s not necessarily an easy change to develop an architecture that would support 3rd party applications. Which is why I’m starting to think that this will end up being solved in the other direction – by dedicated computing platforms that also happen to have camera capabilities. Platforms like, for instance, the iPhone.
Now clearly the title of this post is intended to be a bit ridiculous. I’ll be the first in line to talk about how crap the current iPhone camera is. But the limitations are primarily due to the hardware. And camera hardware, at least up to a certain point, is a really cheap commodity.
So let’s talk a little bit about what I could do if I had a device with decent camera hardware (a reasonably good sensor, onboard flash, maybe some other worthwhile stuff like GPS, WiFi, etc.) along with full access to that hardware via a real programming interface like the one available for the iPhone. Here are just a few ideas:
-Timelapse. This is such a simple feature from a software perspective, yet many (probably most) cameras don’t support it. For my DSLR I need to buy an external trigger with timer capabilities. Why? (Other than the obvious answer, which is that it allows the camera manufacturers to charge a ridiculous amount of money for such a remote trigger. Hint – buy an eBay knockoff instead.)
-Motion detection. It’s a simple enough algorithm to detect movement and only trigger a photo when that occurs. Yes, this would make it easy to set up a camera to see if the babysitter or the house-painters wander into rooms they shouldn’t, but it would also be very interesting to use with, for example, wildlife photography.
-Allow me to name files sensibly and according to whatever scheme I want. Right now I rename my images as soon as they’re loaded onto my computer (based on the day/time the photo was taken) but why can’t my camera just do that for me automatically? And do it according to any scheme I care to specify.
-Let me play with the shutter/flash timing. Allow me to do multiple-flashes over the duration of a long exposure, for instance, to get interesting multiple-exposure effects.
-Give me programmatic control over the autofocus and the zoom (if it’s servo-controlled), so I can shoot bracketed focus-points or animate the zoom while the shutter is open for interesting effects.
-Overlay a reticle on the image, so I can see the framing on different aspect ratios before I shoot the photo.
-Add a nice touchscreen display to the camera and then I’ll be able to easily choose my focus point(s) or choose a region of the image that I want to meter on.
-Activate the microphone to do a voice annotation or capture some ambient sound with every photo. Or for that matter, allow me to voice-activate the shutter. (There’s a whole world full of sound-activation possibilities I suspect…)
-Allow me to reprogram the buttons and dials on the camera in any way I see fit. Any of the buttons/dials, not just a couple of special-case ones…
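Just to underline how trivial some of the items above are from a software perspective: the entire timelapse feature is essentially the loop below. This is a sketch, not a real API – the capture callback stands in for a shutter-release hook that no camera currently exposes:

```python
import time

def timelapse(capture, interval_s=5.0, frames=100):
    """Fire the shutter `frames` times, pausing `interval_s`
    seconds between exposures. `capture` is a stand-in for a
    (hypothetical) shutter-release call on the camera."""
    shots = []
    for _ in range(frames):
        shots.append(capture())
        time.sleep(interval_s)
    return shots
```

That’s the whole feature, minus UI – which is the point: there’s no good reason it should require a $100 external trigger.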
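The motion-detection idea above is similarly small. Here’s a minimal frame-differencing sketch in Python, assuming (hypothetically) that the camera hands us preview frames as pixel arrays:

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, threshold=10.0):
    """Flag motion when the mean absolute pixel difference between
    two consecutive preview frames exceeds a threshold."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > threshold
```

A real implementation would want to blur the frames first so sensor noise doesn’t trigger it, and probably require motion across several consecutive frames before firing the shutter – but the core of it is just this comparison.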
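Likewise the file-naming item: given a hook at the moment a shot is written to the card (imaginary, for now), the renaming itself is one line of standard `strftime` formatting with any scheme the photographer cares to specify:

```python
from datetime import datetime

def capture_filename(taken, scheme="%Y%m%d_%H%M%S.jpg"):
    """Build a filename from the capture timestamp using any
    strftime-style scheme the photographer specifies."""
    return taken.strftime(scheme)

# e.g. capture_filename(datetime(2009, 3, 14, 15, 9, 26))
# → "20090314_150926.jpg"
```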
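And focus bracketing, given servo control over the lens, is just a loop. A sketch with stand-in `set_focus`/`capture` callbacks (both hypothetical – they’re whatever the camera’s API would expose):

```python
def focus_bracket(set_focus, capture, near, far, steps=5):
    """Step the focus distance evenly from `near` to `far`,
    capturing one frame at each stop."""
    shots = []
    for i in range(steps):
        distance = near + (far - near) * i / (steps - 1)
        set_focus(distance)
        shots.append(capture())
    return shots
```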
Some of this hardware control could go down to a very deep level. Let me play with the scanning rate of the rolling shutter, for instance, to give me super jello-cam images.
And then there’s a whole huge variety of general image-processing operations that could be applied in the camera, from custom sharpening algorithms to specialized color-corrections to just about anything else you currently need to do on your computer instead. HDR image creation. Tourist Removal. Etc. Are some of these better done as post-processes rather than in-camera? Sure, probably. But you could make the same claim about a lot of things – it’s why some people shoot RAW and others are happy with JPEG. Bottom line is that there are times where you just want to get a final photo straight out of the camera.
(of course the cool thing would be to have the camera create the processed JPEG while also storing the original RAW file. And then output a metadata file that, when read into Aperture or Lightroom or whatever your RAW processing package of choice is, would duplicate the image processing operations you’d applied in-camera but then allow you to add/remove/modify as you see fit).
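As one concrete example of this kind of in-camera processing, here’s a naive exposure-fusion sketch – blend a bracketed stack by weighting each pixel according to how close it sits to mid-gray. Real HDR pipelines are considerably more sophisticated (alignment, tone mapping, etc.); this is just to show the shape of the thing:

```python
import numpy as np

def naive_exposure_fusion(exposures):
    """Blend a bracketed exposure stack (8-bit-range arrays),
    weighting each pixel by how well exposed it is – pixels
    nearest mid-gray contribute the most."""
    stack = np.stack([e.astype(float) for e in exposures])
    # weight peaks at 1.0 for mid-gray and falls toward 0 at the clipped ends
    weights = 1.0 - 2.0 * np.abs(stack / 255.0 - 0.5)
    weights = np.clip(weights, 1e-3, None)  # avoid divide-by-zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```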
And if your camera includes additional useful hardware like GPS or network access or accelerometers then you’ll have even more functionality you can access. Take a look at the iPhone app Night Camera, which uses the accelerometer to wait until the camera reaches a relatively stable position before taking a photo. A really smart idea, and one that will almost certainly show up in other ‘real’ cameras relatively soon. (Incidentally, it’s worth noting that the main difficulty with algorithmically removing motion blur caused by camera shake is the uncertainty about exactly how the camera moved while the photo was being taken – if we recorded accurate accelerometer readings over the duration that the shutter was open, it would become much easier to remove the blur as a post-process.)
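That stability trick is easy to sketch, too. Assuming the platform hands us a running list of accelerometer magnitudes (hypothetical – actual APIs and units vary), “stable” just means the recent readings have stopped varying:

```python
def is_stable(readings, window=5, tolerance=0.02):
    """True when the last `window` accelerometer magnitudes vary
    by less than `tolerance` – i.e. the camera has settled down
    enough to fire the shutter."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return max(recent) - min(recent) < tolerance
```

A capture loop would poll the accelerometer, append each reading, and fire the shutter the moment this goes True.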
Now, before I get a bunch of replies stating “my camera can already do that” on some of the items listed above, let me reiterate my original point – a camera with the ability to run arbitrary camera-control software would be able to do all of the above items. And a lot more. The real fun will start when lots of people are brainstorming about this and then churn out features that nobody has thought of yet. Who knows what this might include?
Some of these features may not feel terribly important, or may even seem ‘gimmicky’, but all it takes is a single special-case situation where you need one of them and you’ll be glad they’re part of your software arsenal.
Back to the iPhone, it’s worth mentioning that we’re already seeing some of these sorts of applications show up. There are panoramic stitchers and the aforementioned accelerometer function and multiple-exposure resolution enhancers and even a very nice timelapse application. We may also end up seeing a sort of hybrid solution, where a portable device like the iPhone is tethered to the camera to give the desired capabilities.
I’m convinced that this is where the future of the camera is going, at least from a software perspective*, and the features listed above are just a small subset of what we’ll end up seeing. But you tell me – if you had full software control over your camera, what would you do with it?
*The future of the camera from a hardware perspective is a whole other ball of wax but I’ll save my thoughts on that for a later post. For now I’ll just say that I’m betting that the current paradigm of single (expensive) lens + single (expensive) sensor will eventually be replaced by multiple lenses and sensors and some sophisticated stitching software. Some sort of Redundant Array of Inexpensive Lenses and Sensors (I call it RAILS). I’ll try to get a post together on that topic as soon as I get a chance.
I will agree that the camera in the iPhone is not something to lust after, but I recently decided that the convenience factor is hard to beat. While I tote my D200 with me everywhere I go, there are times when I just want to take a quick shot, and the effort of getting the bag out, changing lenses, and all that was a bit too much. That’s why I’ve been working on the theory that the iPhone camera is not great, but if the photo that you make hides that fact then it can still be considered an important piece of equipment. To prove (or disprove – you judge) this I have put together a method of creating what I consider decent photos completely produced inside the iPhone. From taking the photo to editing, it’s all done sans computer and then uploaded to Flickr to share.
Feel free to make that decision yourself.
Oops… last comment didn’t take the URL for some reason. Here you go!
This will happen as soon as Apple enhances the iPhone SDK to allow access to the USB connector on the iPhone and iPod touch. I would put good odds on seeing this in the next few months.
Won’t that be sweet? Nikon could make a camera with an iPod dock, whose entire UI could be running on the iPod. My iPhone software company will be all over that when it happens. Well, if it happens.
Meanwhile, why hasn’t anybody created a high-quality focusable snap-on zoom lens for the iPhone?
(nice subtle plug Dave :-)
But yeah, it seems like we’re past due for someone coming out with a device that has the form-factor of the Mophie (http://www.mophie.com/products/juice-pack) but that incorporates some nice folded-lens optics like you see on a lot of compact point-and-shoots. Nothing technically difficult about it, really – someone just needs to do it.
And I’m definitely looking forward to what Avatron comes up with once you’ve got access to the USB connector…
Very nice, Nathan. If you consider the iPhone’s limitations a sort of frame, and believe G.K. Chesterton’s claim that “Art is limitation; the essence of every painting is its frame,” you can create nice work within the 2 MPixel fixed-focus constraints.
I agree with the file naming, and would like to see a $1000 Red DSLR push this sort of software openness. I would like to see metadata profiles added in-camera, and DNG with GPS data, to streamline all of the stuff I do on import. I want to streamline the process in a custom fashion, and am afraid Canon and Nikon won’t rush to support that until they are pushed by market forces like Red – if Red gets around to it, that is.
iPhone remote for Canons:
But it is pretty useless considering all the limitations – e.g. the camera must be tethered to a PC via USB, it can only be used via wifi, et cetera.
Only in the most limited situations would this be a better solution than a basic remote trigger. Or even just a netbook connected to the tethered PC via wifi and using remote desktop.
I guess it is good to see people working on it, too bad the Apple/AT&T lockdown/appstore will never let these apps work over 3G (see slingbox app).
Well, what about the CHDK model? Open source “load into RAM” software that doesn’t require a computer to be attached. Pretty much anything a supported Canon P&S is physically capable of has been added, including writing your own scripts. I use it for timelapse, but there’s also motion detection, bracketing of almost any conceivable type, live histograms, simultaneous JPG and RAW saves (in DNG format if you prefer), and the list goes on, and on. And all this for my dinky little $100 A590. It’s too bad the 5DMkII doesn’t have this flexibility. But it could.
Maybe we just need the camera to have an API and some extra ram, and enough flexibility in the firmware design so that this kind of add-in functionality programming can be done by whoever wants to write it.
I’m pretty sure that some development company somewhere must be able to provide this functionality… and as for a snap-on lens – they already do a decent one for the Nokia N95, so maybe a quick look on eBay might find something?