…Because their Lips are Moving

One of the earliest posts I did on this blog related to new technologies in truth-detection and as the political season is heating up again I thought it would be worthwhile to revisit some of those points.  (Here’s the original post)

In particular, I’m interested in the variety of non-invasive technologies that are becoming available to tell whether or not a person is consciously lying.  To a greater or lesser extent, most normal (non-sociopathic) people have some sort of physical manifestation whenever they intentionally lie.  This can manifest as micro-expressions, as fluctuations in the pitch, frequency and intensity of the voice, and even as changes in blood flow to the face (which can be detected by an infrared camera).

Are these technologies 100% reliable as lie-detectors?  Not even close.  But they’re also not completely without merit and can, particularly if they’re used in conjunction with other techniques, be very effective.

More importantly, they’re only going to get better – probably lots better. And so even though we may not have the technology yet to accurately and consistently detect when someone is lying, we will eventually be able to look back at the video/audio that is being captured today and determine, after the fact, whether or not the speaker was being truthful.   A bit of retroactive truth-checking, if you will.

In other words, even though we may not be able to accurately analyze the data immediately, we can definitely start collecting it. Infrared cameras are readily available, and microexpressions (which may occur over a span of less than 1/25th of a second) should be something that even standard video (at 30fps) would be able to catch and of course we’ve got much higher-speed cameras than that these days. And today’s cameras should also have plenty of resolution to grab the details needed, particularly if you zoom in on the subject and frame the face only.
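(Back-of-the-envelope, that frame-rate claim checks out. Here’s the arithmetic as a trivial Python check – nothing camera-specific, just counting frame instants:)

```python
import math

def frames_captured(event_duration_s, fps):
    """Worst-case number of frame instants that land inside an event
    of the given duration -- a quick sanity check that ordinary video
    can catch a fleeting microexpression."""
    return math.floor(event_duration_s * fps)

# A 1/25th-second microexpression filmed at standard 30 fps video:
print(frames_captured(1 / 25, 30))  # → 1 frame, even in the worst case
```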

So it seems to me that someone needs to plan on recording all the Presidential (and vice-presidential) debates with a nice array of infrared, high speed, and high-definition cameras. And they need to do it in a public fashion so that every one of these candidates is very aware of it and of why it is being done.

Or am I just being naïve in thinking that the fear of eventually being identified as a liar would actually cause people (or even politicians) to modify their current behavior? Maybe, but it seems like it’s at least worth a shot.


CAMERA FEATURES I WANT (And that really really shouldn’t be all that hard to implement)

Since camera manufacturers continue to annoy me with their unwillingness to add the features I want (yes, I’m taking this personally) I guess I’m about due for another blogpost ranting/whining about it.  I’ve covered some of these things before in this blog (and quite often on the This Week in Photo podcast) but it seems worth having an up-to-date list of some obvious stuff.

Let’s set the ground rules first.  I’m not going to be talking about radical scientific breakthroughs here.  ISO 1 billion at 100 megapixels (capturing a dynamic range that includes both coronal activity on the sun and shadow details on a piece of coal… all in the same photo) will be cool, definitely.  But there’s some fundamental physics that still needs to be dealt with before we get there.  No, all I’m asking for today are engineering – often software – features.  Things that really only require desire/resources and design/integration.

At this point it’s no secret that more people take photos with their phones than with dedicated cameras.  Obviously the main reason for this is because The Best Camera Is The One That’s With You, coupled with the paradigm-shift that comes with network connectivity to friends and social networks.  As an iPhone developer I know how crazy it would be to create a fun app – even one as fun as FreezePaint – without including the ability to share the images to Facebook/Twitter/Flickr/Instagram once they’ve been taken.

But beyond all that, what else does a ‘camera’ like the iPhone give us?  Tons, really, and the bottom line is that you can do photo things with your phone that no ‘traditional’ camera can touch.  (And this in spite of the fact that the iOS development environment is woefully poor at giving deep-down access to the camera hardware.)  That’s because camera phones are an open platform, unbounded by a single company’s vision for what a camera can or should do.

So let’s postulate some basics – a standalone camera with a good sensor, a nice array of lens options, and a reasonably open operating system under the hood that allows full programmatic access to as much of the hardware as possible.  What can we come up with?   (And a caveat before we continue:  Yes I know that there are some cameras that can already do some of the things I’m going to mention.  Some of the things.  Sort of.  We’ll talk about that too.)

First, a few more basic wishlist items on the hardware side of things.

— Radios. Access to the same data network that my phone can get, and also shorter-range communication via Bluetooth and WiFi. (And while you’re at it, couldn’t you just stick something in there that grabs the local time via radio signal and sets my camera’s time automatically?  I’ve got a $20 watch that can do this…)

With network and remote communication in place, all sorts of things happen.  The ‘sharing’ stuff mentioned above of course, but also things like remote control of the focus and shutter via my phone (because the phone is getting a live video feed directly from the camera’s sensor).

Seriously, take a look at this little beauty of an app for controlling an Olympus E-M5 via iPhone.  I love the features it supports. Timelapses with acceleration. Timelapse photos triggered by distance moved (i.e. every 50 feet or so). Sound-detection to trigger the shutter. Etc.  The only bummer is that it requires extra hardware on both phone and camera to communicate.

What else does having a networked camera give?  Hopefully I can buy flash devices that talk nicely too, via some standard transport like bluetooth or WiFi.  Radio Poppers and Pocket Wizards are great but these are professional-level tools that are way overkill (and overpriced) for what I need most of the time.  I just want to hold my camera in one hand and a flash unit (no bigger than a deck of cards, hopefully) in the other hand, extended off to the side to give a pleasing angle on the subject.

(Brief tangent on lights:  For a lot of closer-up work (not to mention video), it sure feels like continuous-light LED sources are starting to be a very viable alternative to xenon flash devices.  These too need to get remote-control friendly – take a look at the awesome kickstarter-funded ‘Kick’.  Sure would be cool, too, if I could buy a light/flash that could take the same batteries as my camera…but I’m getting way off-topic now).

— Sensors.  In this situation the word ‘sensor’ refers to all the auxiliary data that you may want to collect, i.e. accelerometers and gyroscopes and altimeters and compasses and GPS units.  Phone’s already got ‘em – put those in my camera too, please.  If you add it (and allow programmatic access to it), all sorts of cool things will start showing up, just as they have on phones.

For example, use information from the accelerometer to measure the camera’s movement and snap the photo when it’s most stable.  These accelerometers are sensitive enough that they can probably measure your heartbeat and fire the shutter between beats.
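A stability trigger like that is only a few lines of software. Here’s a sketch in plain Python – the window size, threshold, and sample values are all made-up tuning knobs, not anything from a real camera API:

```python
from collections import deque

def stability_trigger(samples, window=8, threshold=0.02):
    """Fire the shutter at the first sample where the recent
    accelerometer readings have settled down (low variance over a
    short rolling window).  Returns the sample index, or None if the
    camera never stabilized."""
    recent = deque(maxlen=window)
    for i, magnitude in enumerate(samples):
        recent.append(magnitude)
        if len(recent) == window:
            mean = sum(recent) / window
            variance = sum((x - mean) ** 2 for x in recent) / window
            if variance < threshold:
                return i
    return None  # caller could fall back to a plain timer

# Shaky at first, then the hands settle:
readings = [1.0, 0.8, 0.9, 0.7, 0.5] + [0.1] * 8
print(stability_trigger(readings))  # → 11
```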

(It’s also worth noting that the main difficulty with algorithmically removing motion blur caused by camera shake, usually via deconvolution, is because of the uncertainty about exactly how the camera moved while the photo was being taken.  If we were to record accurate accelerometer readings over the duration of the time that the shutter was open and then store that in the EXIF data, it suddenly becomes much easier to remove camera-shake blur as a post-process).
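To make that concrete, here’s a rough sketch of turning a recorded camera path into a blur kernel (point-spread function). Everything here is illustrative: a real version would first convert the raw accelerometer/gyro samples into pixel offsets using the focal length and exposure time.

```python
def path_to_psf(path, size=9):
    """Rasterize a recorded camera path (pixel offsets over the
    exposure, e.g. double-integrated from accelerometer samples
    stored in EXIF) into a normalized point-spread function.  With
    the true PSF known, removing the blur becomes ordinary
    (non-blind) deconvolution instead of guesswork."""
    half = size // 2
    psf = [[0.0] * size for _ in range(size)]
    for dx, dy in path:
        ix, iy = half + round(dx), half + round(dy)
        if 0 <= ix < size and 0 <= iy < size:
            psf[iy][ix] += 1.0  # camera spent one sample interval here
    total = sum(map(sum, psf))
    return [[v / total for v in row] for row in psf] if total else psf

# A simple horizontal shake, two pixels to the right:
kernel = path_to_psf([(0, 0), (1, 0), (2, 0)])
print(kernel[4][4:7])  # three equal weights of 1/3 along the shake
```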

Combine GPS with network connectivity so my camera’s positional data can be at least as accurate as my phone (i.e. A-GPS).  Also toss in a compass.

(Incidentally, the desire to have accurate location information isn’t just about remembering where you were when you took a photo.  There’s much cooler technology coming down the pike that will allow for re-creating the actual geometry of the place where you’re standing if you’ve got enough views of the scene.  Take a look at Microsoft’s Photosynth or Autodesk’s 123D.   And thus the more people who are posting geotagged information about the world around them, the better we’ll be able to characterize that world.)

Yeah, there will be a battery-drain penalty if I have all these sensors and radios operating constantly.  Big deal.  Let’s see how much a Canon 5DmkIII battery goes for on ebay these days… Oh look – about TEN DOLLARS.  For that amount I think I can manage to have a few spares on hand.

— Lots of buttons and dials, sensibly placed.  More importantly – allow me to reprogram the buttons and dials in any way I see fit.  Any of the buttons/dials, not just a couple of special-case ones.  (And touchscreens don’t eliminate the need for this.  Too fiddly when you’re trying to concentrate on just getting the shot.)  If you’re really sexy you’ll give me tiny e-ink displays for the button labels so I can reprogram the labels too…

— USB support.  Not just as a dumb device I can plug into a computer, but as a host computer itself.  Like I talked about here, it sure would be nice if my camera had host-device USB capabilities so I could just plug an external drive into it and offload all my photos onto some redundant storage while I’m traveling.  Without needing to carry a laptop with me as well.  (Or having to purchase some overpriced custom card reader with built-in storage.)

And… incidentally… how about letting me charge the battery in the camera by plugging the camera into a USB port?  I understand that there may be power issues (someone out there want to figure out the tradeoff for charge times?) but just give me the choice to leave the bigassed battery-charging cradle at home.  (Or, more specifically, to not be F’d when I accidentally forget it at home!)

(In terms of wired connectivity there’s also Thunderbolt to consider but it smells like it’s a few more years out.)

Finally, for the super-geeks, it sure would be cool to get access to the hardware at a really deep level.  I’m talking stuff like allowing me to play with the scanning rate of the rolling shutter, for instance, if I want to make some funky super-jellocam images.

Okay, let’s dive into software-only stuff.  Here are a few things that would be almost trivial to implement if I had even basic programmatic access to my camera’s hardware.

— Allow me to name files sensibly and according to whatever scheme I want.  Right now I rename my images as soon as they’re loaded onto my computer (based on the day/time the photo was taken) but why can’t my camera just do that for me automatically?  And do it according to any scheme I care to specify.
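For the curious, here’s roughly all the logic such a renamer needs – a minimal Python sketch, with a template syntax I just made up for illustration:

```python
from datetime import datetime

def scheme_name(taken, index,
                scheme="{y}{mo:02d}{d:02d}_{h:02d}{mi:02d}{s:02d}_{n:04d}.jpg"):
    """Build a filename from the capture time using a user-supplied
    template -- the kind of option an in-camera renamer could expose.
    The template keys here are my own invention, not any camera's."""
    return scheme.format(y=taken.year, mo=taken.month, d=taken.day,
                         h=taken.hour, mi=taken.minute, s=taken.second,
                         n=index)

print(scheme_name(datetime(2012, 7, 4, 9, 30, 5), 17))
# → 20120704_093005_0017.jpg
```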

— Timelapse/Long Exposure.  This is such a simple feature from a software perspective, yet many (probably most) cameras still don’t support it.  For my DSLR I need to buy an external trigger with timer capabilities.  Why?  (Other than the obvious answer, which is that it allows the camera manufacturers to charge a ridiculous amount of money for such a remote trigger.  Hint – buy an ebay knockoff instead.)
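Seriously, the core of an intervalometer is a handful of lines. A Python sketch (with the accelerating-interval variant thrown in); a camera OS would just sleep between these offsets and trip the shutter:

```python
def timelapse_schedule(interval_s=5.0, frames=5, accel=1.0):
    """Offsets (seconds from the start) at which each frame should
    fire.  With accel > 1 the gap grows every frame -- the
    'accelerating timelapse' trick mentioned earlier."""
    t, delay, times = 0.0, interval_s, []
    for _ in range(frames):
        times.append(t)
        t += delay
        delay *= accel
    return times

print(timelapse_schedule(2.0, 4, accel=2.0))  # → [0.0, 2.0, 6.0, 14.0]
```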

— Motion detection.  It’s an easy enough algorithm to detect movement and only trigger a photo when that occurs.  Yes, this would make it easy to set up a camera to see if the babysitter or the house-painters wander into rooms they shouldn’t, but it would also be very interesting to use for, say, wildlife photography.

— Let me play with the shutter/flash timing.  Allow me to do multiple-flashes over the duration of a long exposure, for instance, to get interesting multiple-exposure effects.

— Give me programmatic control over the autofocus and the zoom (if it’s servo-controlled), so I can shoot bracketed focus-points or animate the zoom while the shutter is open for interesting effects.  I mean, Lytro is cool and all, but most of the time I don’t want to radically refocus my image – I just want to tweak a slightly missed shot where autofocus grabbed the bridge of the nose instead of the eyes.  If my camera had automatically shot 3 exposures with focus bracketing I’d be able to pick the best, but I’d also be able to use some simple image manipulation to stack them and increase the DOF.
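The ‘pick the best’ half, at least, needs nothing fancy. A toy Python sketch (tiny 2x2 ‘frames’, and a deliberately crude contrast metric of my own choosing):

```python
def sharpness(gray, width):
    """Crude focus score: sum of absolute horizontal neighbor
    differences.  In-focus detail means strong local contrast."""
    return sum(abs(gray[i] - gray[i + 1])
               for i in range(len(gray) - 1)
               if (i + 1) % width != 0)  # don't compare across row ends

def pick_sharpest(frames, width):
    """Given a focus bracket (same scene, shifted focus), return the
    index of the frame with the most detail.  Real focus stacking
    would score per-region and blend, rather than pick one winner."""
    scores = [sharpness(f, width) for f in frames]
    return scores.index(max(scores))

blurry = [10, 10, 10, 10]   # 2x2 frame, flat = out of focus
crisp = [0, 255, 0, 255]    # 2x2 frame, high contrast
print(pick_sharpest([blurry, crisp], width=2))  # → 1
```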

(Brief sidetrack here.  Yes, some of the coolcool stuff will require fairly sophisticated algorithms to accomplish properly in a lot of situations.  So what?  There are, in fact, a lot of sophisticated algorithms out there.  There’s also a lot of stuff that can be done without sophisticated algorithms, just by using elbow grease and a few hours in Photoshop.  Give me the raw material to work with and I’ll figure out the rest!)

— Activate the microphone to do a voice annotation or capture some ambient sound with every photo.  Or for that matter, allow me to voice-activate the shutter.  ( “Siri, take a photo please.”)

Actually, let’s talk about that a bit more.  Voice-control for taking a photo may not be all that exciting, although I can certainly see situations where I’m trying to hold my camera in some strange position in order to get a particular point of view and the shutter-button has become difficult to reach easily.  (Example?  Camera on a tripod that I’m holding above my head to get a very high angle, with LCD viewfinder angled downward so I can see the framing.)   Where’s my voice-activated Digital Camera Assistant when I need her?

But beyond that, think about all the camera settings that you normally have to dig through a menu to find.  Custom-programmable buttons & dials are great, but there’s always going to be a limited number of them.  Being able to quickly tell the camera to adjust the ISO or turn on motor-drive (AKA burst or continuous mode) just might make the difference between getting the shot and not.

Finally, there’s a whole huge variety of general image-processing operations that could be applied in-camera, from custom sharpening algorithms to specialized color-corrections to just about anything else you currently need to do on your computer instead.  HDR image creation.  Panoramic stitching.  Tourist Removal.  Multiple-exposures to generate super-resolution images.  Etc., etc.  Are some of these better done as post-processes rather than in-camera?  Sure, probably.  But you could make the same claim about a lot of things – it’s why some people shoot RAW and others are happy with JPEG. Bottom line is that there are times where you just want to get a final photo straight out of the camera.

Having said that, let’s talk about the proper way to do some of these fancy things like creating HDR imagery.  There are already cameras that can, for example, auto-bracket and then create HDR images.  Sort of.  More specifically what they’re doing is internally creating an HDR image, then tone-mapping that wide dynamic range down to an image that looks ‘nice’ on a normal monitor, and then saving that out as a jpeg.  (‘Nice’, incidentally, being an arbitrary mapping that some dude in a cubicle came up with.  Or maybe a group of dudes in a conference room.  Whatever… it’s still their judgement on how to tone-map an image, not mine.)

Better thing to do?  Shoot an exposure-bracket of the scene and combine them (after auto-aligning as necessary) into a true high dynamic range image.  Save that as a high bit-depth TIFF file or DNG or (better) OpenEXR or something.  You can give me the jpeg too if you want, but don’t throw away all the useful data.  In other words, let me work with HDR images in the same way I work with RAW images because that’s exactly what a RAW file is… a somewhat limited high dynamic range file.
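For what it’s worth, the merge itself is simple – the point is keeping the floats. A minimal Python sketch (assuming already-aligned, already-linearized 0.0–1.0 inputs, which is where the real work hides):

```python
def merge_hdr(exposures):
    """Merge an exposure bracket into one linear radiance value per
    pixel: weight each sample by how far it sits from the clipped
    extremes (a simple hat weighting), divide out the exposure time,
    and keep the result as floats -- the data you'd save to an
    OpenEXR rather than tone-map away.  `exposures` is a list of
    (pixels, exposure_time) pairs with pixels as 0.0-1.0 values."""
    n = len(exposures[0][0])
    out = []
    for i in range(n):
        num = den = 0.0
        for pixels, t in exposures:
            z = pixels[i]
            w = 1.0 - abs(2.0 * z - 1.0)  # hat weight: trust mid-tones
            num += w * (z / t)
            den += w
        out.append(num / den if den else 0.0)
    return out

# Two shots of the same pixel, one stop apart, agree on its radiance:
print(merge_hdr([([0.5], 1.0), ([0.25], 0.5)]))  # → [0.5]
```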

This mentality is the sort of thing I’d like to see for all sorts of in-camera software algorithms – give me something useful but don’t throw away data.

I could probably list dozens of additional image-processing algorithms that it would be cool to have in my camera (not to mention video-related tools).   Some of the features mentioned above may not feel terribly important or may even seem ‘gimmicky’, but all it takes is a single special-case situation where you need one of them and you’ll be glad they’re part of your software arsenal.

So who’s going to make a camera like this?  Who’s going to make the hardware and who’s going to make the software/OS?  In terms of hardware I’m betting it won’t be Canon or Nikon – they’re too successful with their existing systems to create something as open as what I’m describing.  Probably not Sony either – they’ve always relied on proprietary, closed hardware too (look how long it took for them to realize that people were avoiding their cameras simply because they didn’t want to be tied to the Sony-only ‘memory stick’).

I guess I’m hoping that someone like Panasonic or Olympus steps up to the plate, mostly because I love the Micro 4/3 lens selection.  But maybe it will be a dark-horse 3rd party – a company that sees the opportunity here and is willing to invest in order to break into the digital camera marketplace with a huge splash.  Might even be someone like Nokia – does the recent 41 megapixel camera phone indicate a potential pivot from phone manufacturer to camera manufacturer?

In terms of software I know I’d love to see a full-featured camera running iOS, but realistically I suspect that a much more likely scenario is that we’ll get a camera running a version of Google’s Android OS.   Or hell, maybe Microsoft will step up to the plate here.  Crazier things have happened.

But bottom line is that it sure feels like someone is going to do these things, and hopefully soon.  And then the real fun will start, and we’ll see features and capabilities that nobody has thought of yet.  So you tell me – If you had full software control over your camera what would you do with it?

Announcing… FreezePaint


Yeah I know, things have been a little quiet here on the blog lately. I’ve been head-down on a couple of projects that became rather all-consuming. But the good news is that one of them has finally come to fruition and so let’s talk about it.

Because, hey, I’ve just released my first app into the iOS App store!

It’s called FreezePaint and it’s pretty fun, if I do say so myself.

The website for the app is at http://www.freezepaintapp.com and that’ll give you a rough idea of what it’s all about*.  But if I had to do a quick summary description about it, I’d say it’s a sort of live on-the-fly photo-montaging compositing remixer thingy.  Which probably makes it sound more complicated than it is.  Here, watch the video:

Of course as anybody who’s done an app can tell you, getting it launched is only the beginning of the process.  Time to put on my sales&marketing hat.

I’ve had some really great advice from some really great people (thanks, really great people!) and one of the things I heard several times was that it’s extremely important to get that initial sales bump to be as large as possible.

So anybody that’s reading this who has 99 cents in their pocket and is feeling either curious or charitable, I’d be HUGELY APPRECIATIVE if you could go and buy a copy of FreezePaint. More particularly, I’d be extra hugely appreciative if you’d go buy it, like, now. Heck, I’ll even offer a money-back guarantee. If you really don’t like it – don’t think you’ve gotten 99 cents’ worth of amusement out of it – then I’ll happily Paypal a buck back to you. Simple as that.

But beyond just buying it, I’m hoping you can help spread the word a little bit. Because that’s where the real traction will come from. Fortunately we’ve made it pretty darn easy to do that because once you set things up it’s just a single button-click to share via Twitter, Facebook, Flickr or email. (Or all four at the same time!)

(And if you like it, remember that good reviews are an app’s life-blood… here, let me make it easy by providing a link to where you can leave one…  Just click here and scroll down a little bit.)

I know this all sounds rather self-serving. It totally is! I want people to use FreezePaint. I want to see what sort of wacky images you can come up with!  At this point I’m not even sure what the most common usage is going to be!  Will people spend most of their time putting their friends’ faces on their dogs?  Or will it be more of a ‘scrapbooking’ thing – a quick way to collage together a fun event or location.  Beats me.  A few of the images that beta-testers have created are at http://favorites.freezepaintapp.com – send me something good/fun/interesting/disturbing that you’ve done with FreezePaint and there’s a pretty good chance it’ll end up there too!  (There’s a button at the bottom of the ‘sharing’ page in the app that shoots your images straight to my inbox.)

I’m sure I’ll be doing several more posts about all of this – about the process of how it all came together, about the pains of trying to find the right developer to work with (which I did, finally!), about the fun of dealing with the app store submission process, etc… :-)

But for now I’m just hoping that people check it out, tell their friends, and mostly have fun!

*Also, someone remind me to write a blog post at some point about what an awesome tool http://www.unbounce.com is for building product websites like this – it saved me tons of time.

Mirror Box


Buy a box of 6 mirror tiles ( i.e. something like this )

Use some duct-tape and make a box out of them (mirror-side inwards)

(put a little duct-tape handle on the 6th tile to make it easy to take it on and off)

Set timer on camera, (turn flash on), place camera in box, put mirror ‘lid’ onto box before shutter fires…

…get cool photo.

toss in balls of aluminum foil – because they’re shiny.

Play with light-leaks at the seams of the box.

Add candles.  Pretty.

Finally, add Droids.  Everything is better with Droids.

(Click on any of the above images to biggerize them.  Also on Flickr, including a few more.)


UPDATE:  Here’s someone who got inspired by the idea and did a MirrorBox photo-a-day throughout January 2012:  http://flic.kr/s/aHsjxK8ZKe  Cool!

Shades of Gray

There are two types of people in the world – those that divide things into two categories and those that don’t. I’m one of those that don’t :-)

Okay, look… I get it. Our brains are pattern-finding devices and nothing’s easier than putting things into either/or categories. But not everything is easily categorizable as Right or Wrong, Good or Evil, Black or White… in fact virtually nothing is. It’s a mixed-up muddled-up shook-up world, right? So when I get into discussions with people who see nothing but absolutes, I tend to push back a bit. At least in my opinion, that sort of Dualistic worldview is not just lazy thinking, it can quickly grow dangerous if it prevents people from considering all aspects of a situation.

The particular conversation that sparked this blog post somehow turned to the Taijitu – the classic Yin/Yang symbol – which for a lot of people has apparently come to embody this black-and-white worldview. Disregarding the fact that the Taijitu is a lot more nuanced than that, and is more about balance than absolutes, I decided to see if I could come up with something that more explicitly acknowledges the shades of gray that exist in the real world. A bit of image-processing mojo later, and I had this:

From a distance it maintains the general appearance and symmetry of the classic yin/yang ratio but up close we see the real story – the edges are ill-defined and chaotic, nowhere is the symmetry perfect, and most of it is composed of shades of gray.

I’m not sure what exactly to do with this now, but just in case anybody thinks it’s cool I’ve stuck a high-resolution version of it up on flickr with creative common licensing. Do with it what you will, and if someone wants to put it on a T-shirt or something, could you send me one? :-)

The Gaiman Library

Did a fun little blogpost for Shelfari today – a peek into Neil Gaiman’s personal library.  Read all about it here.  

Extremely cool that Gaiman allowed us to do this, and I certainly spent a whole lot of time looking over his books and saying to myself “Hey, I’ve got that one!”  Good fun.  

Gaiman (well, his buddy photographer Kyle Cassidy, actually) sent over quite a bit more than I posted over there, but the original blogpost was getting a bit long.  So as a special bonus for readers of this blog, here’s a few more photos from Mr. Gaiman’s collection, this time featuring some of his reference books (and, oh yeah, a friggin’ Hugo Award  too).  Nice!

Click on any of the images for the larger version – you should be able to read the individual titles on the fullsized shots.







Valley Carpenter Bees

There’s a spot on my regular bike-ride route where I often take a break – a couple of benches set up along the banks of one of Los Angeles’ concrete rivers.  And when summer rolls around I’ve often noticed these awesome giant bumblebees buzzing around the flowering bushes there – bumblebees that a bit of research told me were Valley Carpenter Bees.  So on a recent ride I went ahead and grabbed a few photos (using my Panasonic LX3, which continues to prove itself as a handy carryaround macro camera). 

Now here’s the interesting thing about the Valley Carpenter Bees – they exhibit a fairly significant sexual dimorphism.  Here’s a shot of a female – jet black and rather ominous looking.  

But the males – ahh, these are truly stunning.  Honey-colored fuzzballs with dappled greenish eyes, I still remember the first time I saw one.  Definitely a “What the ?!?!” moment.  Here’s a couple of shots.


Like I said they’re pretty big – about the size of the first joint on my finger – and in fact here’s one with my thumb in the photo to try and give some scale:


A few more shots are up on flickr.

Composting/Compositing Hall of Shame

Not that I haven’t done it myself a few times (helped along by overzealous spellcheckers) but I find it rather amusing to see published books that somehow manage to confuse image-combination with organic decay. Here’s a couple of favorites:

(a few more are here)

And of course that’s why I figured I’d be better off if I owned both www.digitalcomposting.com and www.digitalcompositing.com :-)