Viral Videos, iPhone Apps… and Politicians with a Tiny Face

I’ve got a longer post coming about this, but I just wanted to let everybody know that we’ve got a new iPhone app out! It’s called ‘Faceship’ and, for this version 1.0 release, its one and only purpose in life is to give people Tiny Faces.

Why Tiny Faces? Well, I did this quick video of Mitt Romney with a Tiny Face a little while ago. (Go ahead and give it a watch – it’s only a few seconds long.)

Somehow it went super-viral, with over 1.75 million views so far. Which is, um, crazy.

So we figured we’d make an app for anybody to give themselves (or their friends) a Tiny Face. (Photos only – no video… yet)

The app’s called Faceship and it’s FREE. Not even any ads in it. Because we love you :-)

So I’d really really love it if you’d download it, give it a play-with, and most importantly, tell your friends and SHARE TINYFACE PHOTOS around the web. Apps live or die by word of mouth, so any help here would be super-appreciated. Seriously, thanks!

(Getting a nice review doesn’t hurt either – if you’re feeling generous please tell us what you think!)

(Oh, and if you don’t like the fact that the original video featured Romney, I did an Obama one too, and also a couple of other folks.  And a Donald Trump with a Bald Head.  Here’s our YouTube channel if you want a few more minutes of amusement – enjoy!)

Lifelogging – is it time?

I’ve done a bunch of talks lately, in places ranging from Hong Kong to Costa Rica, about the future of cameras and photography. (Here’s one.) And one of the things I discuss is this concept of ‘Lifelogging’ – the idea of wearing an always-on camera that captures everything you do as you go about your daily life.

At first glance, I think, a lot of people find that the idea falls somewhere between obtrusive and completely boring. Who wants to record everything they do? Why? But the more I’ve thought about it, the more I’m convinced that it’s going to become very, very common. Because when it comes down to it, don’t you wish you could call up photos from all the key incidents in your life? Don’t you wish your memories were preserved beyond what the organic brain is capable of?

So when I saw yesterday’s announcement of a Kickstarter for a tiny wearable camera (called ‘Memoto’) that takes a photo every half-minute, I wasn’t even slightly surprised. And after a few minutes of considering it I went ahead and ordered one (in spite of the rather hefty $200 price tag).

Is this going to be the device that makes the concept mainstream? I doubt it, but it’s an interesting step in that direction. I’m sure there will be stories of it being misused; we’re going to hear about someone getting Punched In The Face because they wore it somewhere that’s not appropriate. (And by ‘appropriate’ I include just about any place where the photographee doesn’t want to be on camera.)

But ultimately I think it’s inevitable that something like this will catch on with a lot of people. The biggest question for me, really, is whether it’s going to happen before or after the cameras get small enough to be undetectable by the people you’re interacting with on a daily basis.

For me personally I doubt I’ll wear it on a daily basis – mostly because That Would Be Weird. But as someone who loves to travel, I suspect the first trip I take after I get the device will see me clipping it on as soon as I head for the airport and not removing it again until I’m back home.

As for the device itself? I like a lot of what they’ve done. Small, rugged, decent battery life, and simple.  And they seem to have the expertise to make it happen.

The devil is in the details, of course. How are the low-light capabilities, for example? And it’ll be interesting to see what they decide on for the field of view. (Personally I suspect that the best way to go might be a very wide-angle lens – fisheye, even – combined with software that can rectilinearize the image after the fact. With a 5 megapixel image you’ve got some leeway to do this, and ultimately it’s important to remember that for most people the value is going to be in capturing the moment as completely as possible, rather than creating photos that are suitable for wall-hanging.)

And, as is to be expected, if you look at the comments on the Kickstarter you’ve already got tons of people asking for specific features. But I like the decisions they’ve made so far in terms of keeping it small and simple, and hopefully they’ll stay the course.

You can check out the Kickstarter here  (they’ve already reached their funding goal, less than 24 hours after going live) and once mine shows up I’ll definitely be talking about it some more.

 

…Because their Lips are Moving

One of the earliest posts I did on this blog related to new technologies in truth-detection, and as the political season heats up again I thought it would be worthwhile to revisit some of those points. (Here’s the original post.)

In particular, I’m interested in the variety of non-invasive technologies that are becoming available to tell whether or not a person is consciously lying. To a greater or lesser extent, most normal (non-sociopathic) people show some sort of physical manifestation whenever they intentionally lie. This can show up as micro-expressions, as fluctuations in the pitch, frequency and intensity of the voice, and even as changes in blood flow to the face (which can be detected by an infrared camera).

Are these technologies 100% reliable as lie-detectors?  Not even close.  But they’re also not completely without merit and can, particularly if they’re used in conjunction with other techniques, be very effective.

More importantly, they’re only going to get better – probably lots better. And so even though we may not have the technology yet to accurately and consistently detect when someone is lying, we will eventually be able to look back at the video/audio that is being captured today and determine, after the fact, whether or not the speaker was being truthful.   A bit of retroactive truth-checking, if you will.

In other words, even though we may not be able to accurately analyze the data immediately, we can definitely start collecting it. Infrared cameras are readily available. And microexpressions – which may last less than 1/25th of a second (40 milliseconds) – should be catchable even on standard 30fps video, which records a frame every 33 milliseconds; and of course we’ve got much higher-speed cameras than that these days. Today’s cameras should also have plenty of resolution to grab the details needed, particularly if you zoom in on the subject and frame the face only.

So it seems to me that someone needs to plan on recording all the Presidential (and vice-presidential) debates with a nice array of infrared, high speed, and high-definition cameras. And they need to do it in a public fashion so that every one of these candidates is very aware of it and of why it is being done.

Or am I just being naïve in thinking that the fear of eventually being identified as a liar would actually cause people (or even politicians) to modify their current behavior? Maybe, but it seems like it’s at least worth a shot.

Watching Movies on the iPhone 5

I’ll admit it… I have, on occasion, actually watched a movie or two on my iPhone. Forgive me, David Lynch.

(Hell, I’ll even condemn myself to a deeper circle of hell by admitting that I actually watched one of the most glorious wide-screen movies ever – Lawrence of Arabia* – on an iPhone while flying to Jordan earlier this year.   Of course I’ve seen it more than a few times already, but still…)

So even though it’s far from my preferred viewing scenario, all this talk about the alleged size-change of the next-generation iPhone got me to thinking about how that extra real-estate would affect movie-watching on such a device.

The rumormongers all seem to have reached consensus that the next iPhone will keep the same width of 640 pixels but will extend the height from the current 4S value of 960 up to 1136 pixels. Do the math on that and you’ll find that the screen has about 18% more area (i.e. [640×1136]/[640×960] ≈ 1.183 times larger).

But there’s more to the story when you actually sit down to watch a movie, because every movie has a specific aspect ratio that has to be fit into the screen you’re viewing it on. So if we’re looking at something like The Godfather on our iPhone 4S (assuming our digital file is in the 1.85:1 aspect ratio the original movie was shot in), it will be scaled to the maximum width of the display and then ‘letterboxed’ top and bottom with black. Here’s an example of what this looks like (only instead of letterboxing with black bars I’ve made them a dark gray so you can see them better).

Now let’s look at the same movie on the taller (which, when we turn it sideways, becomes wider) new device.

The aspect ratio of the movie stays the same but because the aspect ratio of the phone is much wider the image fits much better into the space we’ve got.  And thus the letterbox bars on top and bottom are much smaller.

How about Raiders of the Lost Ark?

Much nicer, eh? In fact, even though the display has only 18% more area, the fact that our widescreen movies fit so much better into the frame means the picture itself is a whopping 40% bigger.
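If you want to check the arithmetic, here’s a quick Python sketch (using the rumored screen dimensions, of course) that computes the fitted picture size for each of the aspect ratios discussed in this post:

```python
def fitted_size(screen_w, screen_h, movie_aspect):
    """Fit a movie of the given aspect ratio inside a landscape screen."""
    if movie_aspect >= screen_w / screen_h:
        # Movie is wider than the screen: width-limited, letterbox top/bottom.
        return screen_w, screen_w / movie_aspect
    # Movie is narrower than the screen: height-limited, pillarbox the sides.
    return screen_h * movie_aspect, screen_h

iphone_4s = (960, 640)    # landscape, pixels
iphone_5  = (1136, 640)   # the rumored new screen

for aspect, name in [(1.85, "The Godfather (1.85:1)"),
                     (16 / 9, "Game of Thrones (16:9)"),
                     (4 / 3, "Gilligan's Island (4:3)")]:
    w4, h4 = fitted_size(*iphone_4s, aspect)
    w5, h5 = fitted_size(*iphone_5, aspect)
    print(f"{name}: {(w5 * h5) / (w4 * h4) - 1:+.0%} more picture area")
```

Run it and you get +40% for both widescreen cases and no gain at all for 4:3 (more on those cases below).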

Of course if you’re watching Gilligan’s Island – a show that was shot in the typical television aspect ratio of 1.33:1 (i.e. 4:3) – the extra width doesn’t buy you anything, because the height is the limiting factor.

(Not to denigrate 4:3 by associating it only with Gilligan, by the way. Casablanca, Citizen Kane, Wizard of Oz… all 4:3)

But for the most part this new aspect ratio is a really nice perk for the movie-watcher. Here’s one more example, using Game of Thrones, which is shot in the HDTV-standard aspect ratio of 16:9. This is almost exactly the aspect ratio of the alleged new iPhone, so it will now fit perfectly (i.e. without any letterboxing at all).

Personally, I’m looking forward to it.  Just don’t tell David Lynch.

(Caveat:  It’s worth noting that, depending on where you get your movies from, you might find different aspect ratios than what I’ve mentioned above.  Many movies are remastered to different aspect ratios depending on their intended release platform – DVD, iTunes, whatever.  If you’re watching a rip of a DVD from 1990, all bets are off)

Also, completely unrelated to this article, feel free to go grab a copy of my FreezePaint iPhone app, which is already tonz o’ fun and which will be at least 40% MORE FUN on the iPhone 5.  Guaranteed.

(*Speaking of Lawrence of Arabia: while I was grabbing some images for this blogpost I came across this, which is completely unrelated to aspect ratios or iPhones but was just too awesome not to post. Click to make bigger. I didn’t find it with any attribution, so if anybody knows who did it I’ll be happy to credit them.)

Apple Tablet Prototype


Just saw this article on AppleInsider showing a photo of one of Apple’s original tablet prototypes. Quick story about my very brief interaction with this thing.

Although the article says that it may have been developed sometime between 2002 and 2004, I’m almost positive that the date I saw it would have been late 2004 at the earliest and more likely 2005.

They brought a few of us from the pro-apps group – UI- and Product-Designer types – into a little windowless room in Cupertino, where a cardboard box sat on the table. Once they were sure the door was locked, and once they’d made it very clear that everything we were about to see was not to be discussed outside that room, they lifted the box off to reveal something that looked very much like what’s shown in the photo above: pretty much the same footprint as the then-current 12″ Aluminum PowerBook G4, but thinner and with the white polycarbonate case.

The reason we were being brought in to talk about it was because they wanted to get people coming up with a variety of multi-touch gestures that might be useful. One of the guys in our group (whom I won’t mention by name since he still works there) spent a bunch of time generating cool ideas that were then fed back into the secret machine.

It was only a few years later, when the phone was announced, that I realized this was, yet again, a bit of masterful misdirection by Apple – something they do even within the company. By that point they were almost certainly already working on the phone, but rather than show us that, they put the prototype tablet in front of us instead, thereby limiting the number of people who had the real story. (Being part of pro-apps rather than one of the more core groups meant we were by definition a bit more on the periphery, so this wasn’t a surprise.)

It also shows how they were already in the process of generating patents on multi-touch gestures. Interesting to note that this must have been right around the same time that Apple acquired FingerWorks.

 

CAMERA FEATURES I WANT (And that really really shouldn’t be all that hard to implement)

Since camera manufacturers continue to annoy me with their unwillingness to add the features I want (yes, I’m taking this personally) I guess I’m about due for another blogpost ranting/whining about it.  I’ve covered some of these things before in this blog (and quite often on the This Week in Photo podcast) but it seems worth having an up-to-date list of some obvious stuff.

Let’s set the ground rules first. I’m not going to be talking about radical scientific breakthroughs here. ISO 1 billion at 100 megapixels (capturing a dynamic range that includes both coronal activity on the sun and shadow details on a piece of coal… all in the same photo) will be cool, definitely. But there’s some fundamental physics that still needs to be dealt with before we get there. No, all I’m asking for today are engineering features – often software features. Things that really only require desire/resources and design/integration.

At this point it’s no secret that more people take photos with their phones than with dedicated cameras.  Obviously the main reason for this is because The Best Camera Is The One That’s With You, coupled with the paradigm-shift that comes with network connectivity to friends and social networks.  As an iPhone developer I know how crazy it would be to create a fun app – even one as fun as FreezePaint – without including the ability to share the images to Facebook/Twitter/Flickr/Instagram once they’ve been taken.

But beyond all that, what else does a ‘camera’ like the iPhone give us? Tons, really, and the bottom line is that you can do photo things with your phone that no ‘traditional’ camera can touch (and this in spite of the fact that the iOS development environment is woefully poor at giving deep-down access to the camera hardware). That’s because camera phones are an open platform, unbounded by a single company’s vision of what a camera can or should do.

So let’s postulate some basics – a standalone camera with a good sensor, a nice array of lens options, and a reasonably open operating system under the hood that allows full programmatic access to as much of the hardware as possible.  What can we come up with?   (And a caveat before we continue:  Yes I know that there are some cameras that can already do some of the things I’m going to mention.  Some of the things.  Sort of.  We’ll talk about that too.)

First, a few more basic wishlist items on the hardware side of things.

– Radios. Access to the same data network that my phone gets, and also shorter-range communication via Bluetooth and WiFi. (And while you’re at it, couldn’t you just stick something in there that grabs the local time via radio signal and sets my camera’s clock automatically? I’ve got a $20 watch that can do this…)

With network and remote communication in place, all sorts of things happen.  The ‘sharing’ stuff mentioned above of course, but also things like remote control of the focus and shutter via my phone (because the phone is getting a live video feed directly from the camera’s sensor).

Seriously, take a look at this little beauty of an app for controlling an Olympus E-M5 via iPhone. I love the features it supports: timelapses with acceleration; timelapse photos triggered by distance moved (i.e. every 50 feet or something); sound-detection to trigger the shutter; etc. The only bummer is that it requires extra hardware on both phone and camera to communicate.

What else does having a networked camera give?  Hopefully I can buy flash devices that talk nicely too, via some standard transport like bluetooth or WiFi.  Radio Poppers and Pocket Wizards are great but these are professional-level tools that are way overkill (and overpriced) for what I need most of the time.  I just want to hold my camera in one hand and a flash unit (no bigger than a deck of cards, hopefully) in the other hand, extended off to the side to give a pleasing angle on the subject.

(Brief tangent on lights:  For a lot of closer-up work (not to mention video), it sure feels like continuous-light LED sources are starting to be a very viable alternative to xenon flash devices.  These too need to get remote-control friendly – take a look at the awesome kickstarter-funded ‘Kick’.  Sure would be cool, too, if I could buy a light/flash that could take the same batteries as my camera…but I’m getting way off-topic now).

– Sensors. In this context the word ‘sensors’ refers not to the imaging sensor but to all the auxiliary data-gathering hardware you might want: accelerometers, gyroscopes, altimeters, compasses, GPS units. The phone’s already got ‘em – put those in my camera too, please. If you add them (and allow programmatic access to them), all sorts of cool things will start showing up, just as they have on phones.

For example, use information from the accelerometer to measure the camera’s movement and snap the photo at the moment it’s most stable (see the sketch below). These accelerometers are sensitive enough that they can probably measure your heartbeat and fire the shutter between beats.
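No camera on the market exposes this today, so this is just a minimal sketch of the idea – `read_accel` and `fire_shutter` are hypothetical stand-ins for whatever APIs such an open camera would provide, and the threshold would need tuning on real hardware:

```python
import math
from collections import deque

WINDOW = 20         # recent samples to consider (~0.2 s at 100 Hz)
THRESHOLD = 0.002   # shake level (in g) below which we call the camera stable

def wait_until_stable(read_accel, fire_shutter):
    """Poll the accelerometer and fire the shutter at the first quiet moment.

    read_accel:   hypothetical callable returning an (x, y, z) sample in g.
    fire_shutter: hypothetical callable that actually takes the photo.
    """
    recent = deque(maxlen=WINDOW)
    while True:
        x, y, z = read_accel()
        recent.append(math.sqrt(x * x + y * y + z * z))
        if len(recent) == WINDOW:
            mean = sum(recent) / WINDOW
            stddev = math.sqrt(sum((m - mean) ** 2 for m in recent) / WINDOW)
            if stddev < THRESHOLD:   # barely any movement across the window
                fire_shutter()
                return
```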

(It’s also worth noting that the main difficulty with algorithmically removing motion blur caused by camera shake – usually via deconvolution – is the uncertainty about exactly how the camera moved while the photo was being taken. If we were to record accurate accelerometer readings over the duration that the shutter was open and store them in the EXIF data, it would suddenly become much easier to remove camera-shake blur as a post-process.)
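Capturing that data is the easy part. Another sketch, again with the same hypothetical `read_accel` sampler – I’m writing the trace to a JSON sidecar file for simplicity, though a real camera would presumably embed it in the EXIF block directly:

```python
import json
import time

def record_shake_trace(read_accel, exposure_s, rate_hz=200):
    """Sample the (hypothetical) accelerometer while the shutter is open."""
    samples, t0 = [], time.monotonic()
    while (now := time.monotonic() - t0) < exposure_s:
        x, y, z = read_accel()
        samples.append({"t": now, "x": x, "y": y, "z": z})
        time.sleep(1.0 / rate_hz)
    return samples

def save_sidecar(photo_path, trace):
    # A deconvolution post-process would read this back to build its blur kernel.
    with open(photo_path + ".shake.json", "w") as f:
        json.dump(trace, f)
```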

Combine GPS with network connectivity so my camera’s positional data can be at least as accurate as my phone’s (i.e. A-GPS). Also toss in a compass.

(Incidentally, the desire to have accurate location information isn’t just about remembering where you were when you took a photo. There’s much cooler technology coming down the pike that will allow for re-creating the actual geometry of the place where you’re standing, given enough views of the scene. Take a look at Microsoft’s Photosynth or Autodesk’s 123D. The more people who post geotagged information about the world around them, the better we’ll be able to characterize that world.)

Yeah, there will be a battery-drain penalty if I have all these sensors and radios operating constantly.  Big deal.  Let’s see how much a Canon 5DmkIII battery goes for on ebay these days… Oh look – about TEN DOLLARS.  For that amount I think I can manage to have a few spares on hand.

– Lots of buttons and dials, sensibly placed.  More importantly – allow me to reprogram the buttons and dials in any way I see fit.  Any of the buttons/dials, not just a couple of special-case ones.  (And touchscreens don’t eliminate the need for this.  Too fiddly when you’re trying to concentrate on just getting the shot.)  If you’re really sexy you’ll give me tiny e-ink displays for the button labels so I can reprogram the labels too…

– USB support.  Not just as a dumb device I can plug into a computer, but as a host computer itself.  Like I talked about here, it sure would be nice if my camera had host-device USB capabilities so I could just plug an external drive into it and offload all my photos onto some redundant storage while I’m traveling.  Without needing to carry a laptop with me as well.  (Or having to purchase some overpriced custom card reader with built-in storage.)

And… incidentally… how about letting me charge the battery in the camera by plugging the camera into a USB port? I understand that there may be power issues (someone out there want to figure out the tradeoff in charge times?) but just give me the choice to leave the bigassed battery-charging cradle at home. (Or, more specifically, to not be F’d when I accidentally forget it at home!)

(In terms of wired connectivity there’s also Thunderbolt to consider but it smells like it’s a few more years out.)

Finally, for the super-geeks, it sure would be cool to get access to the hardware at a really deep level.  I’m talking stuff like allowing me to play with the scanning rate of the rolling shutter, for instance, if I want to make some funky super-jellocam images.

Okay, let’s dive into software-only stuff.  Here are a few things that would be almost trivial to implement if I had even basic programmatic access to my camera’s hardware.

– Allow me to name files sensibly, according to whatever scheme I want. Right now I rename my images as soon as they’re loaded onto my computer (based on the day/time each photo was taken), but why can’t my camera just do that for me automatically – and according to any scheme I care to specify? Something like the sketch below.
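A few lines of Python cover the whole feature. This version cheats and uses each file’s modification time as a stand-in; a proper implementation would read EXIF DateTimeOriginal instead:

```python
import os
import time

def rename_by_timestamp(folder, ext=".jpg"):
    """Rename photos to YYYYMMDD-HHMMSS.jpg based on when they were taken."""
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith(ext):
            continue
        src = os.path.join(folder, name)
        stamp = time.strftime("%Y%m%d-%H%M%S",
                              time.localtime(os.path.getmtime(src)))
        dst = os.path.join(folder, stamp + ext)
        n = 1
        while os.path.exists(dst):  # two shots in the same second
            dst = os.path.join(folder, f"{stamp}-{n}{ext}")
            n += 1
        os.rename(src, dst)
```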

– Timelapse/Long exposure. This is such a simple feature from a software perspective, yet many (probably most) cameras still don’t support it. For my DSLR I need to buy an external trigger with timer capabilities. Why? (Other than the obvious answer, which is that it allows the camera manufacturers to charge a ridiculous amount of money for such a remote trigger. Hint – buy an eBay knockoff instead.) The sketch below shows just how little code this takes.
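Here’s a complete intervalometer in Python, driving a tethered camera through the (real) gphoto2 command-line tool – assuming, of course, a camera that gphoto2 supports:

```python
import subprocess
import time

def timelapse(shots, interval_s):
    """Fire the shutter of a gphoto2-tethered camera every interval_s seconds."""
    for i in range(shots):
        subprocess.run(["gphoto2", "--capture-image"], check=True)
        print(f"frame {i + 1}/{shots}")
        time.sleep(interval_s)

# One frame every 10 seconds for an hour:
# timelapse(shots=360, interval_s=10)
```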

– Motion detection. It’s an easy enough algorithm to detect movement and trigger a photo only when it occurs – basic frame differencing, as in the sketch below. Yes, this would make it easy to set up a camera to see if the babysitter or the house-painters wander into rooms they shouldn’t, but it would also be very interesting for, say, wildlife photography.
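A frame-differencing sketch in Python with OpenCV, using a webcam as a stand-in for the camera’s live sensor feed (the thresholds would obviously need tuning for real use):

```python
import cv2

def watch_for_motion(changed_pixels=5000, trigger=lambda: print("click!")):
    """Call trigger() whenever enough pixels change between consecutive frames."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("no camera found")
    prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        diff = cv2.absdiff(prev, gray)                     # what moved?
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > changed_pixels:
            trigger()
        prev = gray
```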

– Let me play with the shutter/flash timing.  Allow me to do multiple-flashes over the duration of a long exposure, for instance, to get interesting multiple-exposure effects.

– Give me programmatic control over the autofocus and the zoom (if it’s servo-controlled), so I can shoot bracketed focus points or animate the zoom while the shutter is open for interesting effects. I mean, Lytro is cool and all, but most of the time I don’t want to radically refocus my image – I just want to rescue a slightly missed shot where the autofocus grabbed the bridge of the nose instead of the eyes. If my camera had automatically shot 3 exposures with focus bracketing I’d be able to pick the best, but I’d also be able to use some simple image manipulation to stack them and increase the DOF (see the sketch below).
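The ‘stack them’ half is very doable with today’s tools. A minimal Python/OpenCV sketch, assuming the bracketed frames are already aligned (a real pipeline would register them first): each output pixel simply comes from whichever frame is locally sharpest.

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Merge a focus bracket by taking each pixel from the sharpest frame."""
    images = [cv2.imread(p) for p in paths]
    # Local sharpness estimate: absolute Laplacian response, smoothed.
    sharpness = [cv2.GaussianBlur(
                     np.abs(cv2.Laplacian(
                         cv2.cvtColor(im, cv2.COLOR_BGR2GRAY), cv2.CV_64F)),
                     (9, 9), 0)
                 for im in images]
    best = np.argmax(np.stack(sharpness), axis=0)   # per-pixel winner
    stacked = np.zeros_like(images[0])
    for i, im in enumerate(images):
        stacked[best == i] = im[best == i]
    return stacked

# cv2.imwrite("stacked.jpg", focus_stack(["near.jpg", "mid.jpg", "far.jpg"]))
```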

(Brief sidetrack here. Yes, some of the cool stuff will require fairly sophisticated algorithms to accomplish properly in a lot of situations. So what? There are, in fact, a lot of sophisticated algorithms out there. There’s also a lot of stuff that can be done without sophisticated algorithms, just by using elbow grease and a few hours in Photoshop. Give me the raw material to work with and I’ll figure out the rest!)

– Activate the microphone to do a voice annotation or capture some ambient sound with every photo.  Or for that matter, allow me to voice-activate the shutter.  ( “Siri, take a photo please.”)

Actually, let’s talk about that a bit more.  Voice-control for taking a photo may not be all that exciting, although I can certainly see situations where I’m trying to hold my camera in some strange position in order to get a particular point of view and the shutter-button has become difficult to reach easily.  (Example?  Camera on a tripod that I’m holding above my head to get a very high angle, with LCD viewfinder angled downward so I can see the framing.)   Where’s my voice-activated Digital Camera Assistant when I need her?

But beyond that, think about all the camera settings that you normally have to dig through a menu to find.  Custom-programmable buttons & dials are great, but there’s always going to be a limited number of them.  Being able to quickly tell the camera to adjust the ISO or turn on motor-drive (AKA burst or continuous mode) just might make the difference between getting the shot and not.

Finally, there’s a whole huge variety of general image-processing operations that could be applied in-camera, from custom sharpening algorithms to specialized color-corrections to just about anything else you currently need to do on your computer instead.  HDR image creation.  Panoramic stitching.  Tourist Removal.  Multiple-exposures to generate super-resolution images.  Etc., etc.  Are some of these better done as post-processes rather than in-camera?  Sure, probably.  But you could make the same claim about a lot of things – it’s why some people shoot RAW and others are happy with JPEG. Bottom line is that there are times where you just want to get a final photo straight out of the camera.

Having said that, let’s talk about the proper way to do some of these fancy things, like creating HDR imagery. There are already cameras that can, for example, auto-bracket and then create HDR images. Sort of. More specifically, what they’re doing is internally creating an HDR image, tone-mapping that wide dynamic range down to an image that looks ‘nice’ on a normal monitor, and then saving that out as a JPEG. (‘Nice’, incidentally, being an arbitrary mapping that some dude in a cubicle came up with. Or maybe a group of dudes in a conference room. Whatever… it’s still their judgment on how to tone-map an image, not mine.)

The better thing to do? Shoot an exposure bracket of the scene and combine the frames (after auto-aligning as necessary) into a true high-dynamic-range image. Save that as a high bit-depth TIFF or DNG or (better) OpenEXR or something. You can give me the JPEG too if you want, but don’t throw away all the useful data. In other words, let me work with HDR images the same way I work with RAW images, because that’s exactly what a RAW file is… a somewhat limited high-dynamic-range file.
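For the curious, here’s roughly what that looks like in Python with OpenCV: align the frames, recover the camera’s response curve, merge the bracket into linear radiance, and save the float data itself rather than (only) a tone-mapped JPEG. I’m writing Radiance .hdr here because OpenEXR support varies by OpenCV build.

```python
import cv2
import numpy as np

def merge_bracket(paths, exposure_times_s, out_path="merged.hdr"):
    """Merge an exposure bracket into a true HDR image, keeping the data."""
    images = [cv2.imread(p) for p in paths]
    times = np.array(exposure_times_s, dtype=np.float32)
    cv2.createAlignMTB().process(images, images)   # auto-align handheld frames
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)
    cv2.imwrite(out_path, hdr)   # float radiance data, nothing thrown away
    return hdr

# merge_bracket(["under.jpg", "normal.jpg", "over.jpg"], [1/320, 1/80, 1/20])
```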

This mentality is the sort of thing I’d like to see for all sorts of in-camera software algorithms – give me something useful but don’t throw away data.

I could probably list dozens of additional image-processing algorithms that it would be cool to have in my camera (not to mention video-related tools).   Some of the features mentioned above may not feel terribly important or may even seem ‘gimmicky’, but all it takes is a single special-case situation where you need one of them and you’ll be glad they’re part of your software arsenal.

So who’s going to make a camera like this? Who’s going to make the hardware, and who’s going to make the software/OS? In terms of hardware I’m betting it won’t be Canon or Nikon – they’re too successful with their existing systems to create something as open as what I’m describing. Probably not Sony either – they’ve always relied on proprietary, closed hardware (look how long it took them to realize that people were avoiding their cameras simply because they didn’t want to be tied to the Sony-only Memory Stick).

I guess I’m hoping that someone like Panasonic or Olympus steps up to the plate, mostly because I love the Micro 4/3 lens selection. But maybe it will be a dark-horse third party – a company that sees the opportunity here and is willing to invest in order to break into the digital camera marketplace with a huge splash. It might even be someone like Nokia – does the recent 41-megapixel camera phone indicate a potential pivot from phone manufacturer to camera manufacturer?

In terms of software I know I’d love to see a full-featured camera running iOS, but realistically I suspect that a much more likely scenario is that we’ll get a camera running a version of Google’s Android OS.   Or hell, maybe Microsoft will step up to the plate here.  Crazier things have happened.

But the bottom line is that it sure feels like someone is going to do these things, and hopefully soon. And then the real fun will start, and we’ll see features and capabilities that nobody has thought of yet. So you tell me – if you had full software control over your camera, what would you do with it?

Announcing… FreezePaint


Yeah I know, things have been a little quiet here on the blog lately. I’ve been head-down on a couple of projects that became rather all-consuming. But the good news is that one of them has finally come to fruition and so let’s talk about it.

Because, hey, I’ve just released my first app into the iOS App store!

It’s called FreezePaint and it’s pretty fun, if I do say so myself.

The website for the app is at http://www.freezepaintapp.com and that’ll give you a rough idea of what it’s all about*. But if I had to give a quick summary description, I’d say it’s a sort of live, on-the-fly photo-montaging compositing remixer thingy. Which probably makes it sound more complicated than it is. Here, watch the video:

Of course, as anybody who’s shipped an app can tell you, getting it launched is only the beginning of the process. Time to put on my sales & marketing hat.

I’ve had some really great advice from some really great people (thanks, really great people!) and one of the things I heard several times was that it’s extremely important to get that initial sales bump to be as large as possible.

So anybody that’s reading this who has 99 cents in their pocket and is feeling either curious or charitable: I’d be HUGELY APPRECIATIVE if you could go and buy a copy of FreezePaint. More particularly, I’d be extra hugely appreciative if you’d go buy it, like, now. Heck, I’ll even offer a money-back guarantee: if you really don’t like it – don’t think you’ve gotten 99 cents’ worth of amusement out of it – then I’ll happily PayPal a buck back to you. Simple as that.

But beyond just buying it, I’m hoping you can help spread the word a little bit. Because that’s where the real traction will come from. Fortunately we’ve made it pretty darn easy to do that because once you set things up it’s just a single button-click to share via Twitter, Facebook, Flickr or email. (Or all four at the same time!)

(And if you like it, remember that good reviews are an app’s life-blood… here, let me make it easy by providing a link to where you can leave one…  Just click here and scroll down a little bit.)

I know this all sounds rather self-serving. It totally is! I want people to use FreezePaint. I want to see what sort of wacky images you can come up with!  At this point I’m not even sure what the most common usage is going to be!  Will people spend most of their time putting their friends’ faces on their dogs?  Or will it be more of a ‘scrapbooking’ thing – a quick way to collage together a fun event or location.  Beats me.  A few of the images that beta-testers have created are at http://favorites.freezepaintapp.com – send me something good/fun/interesting/disturbing that you’ve done with FreezePaint and there’s a pretty good chance it’ll end up there too!  (There’s a button at the bottom of the ‘sharing’ page in the app that shoots your images straight to my inbox.)

I’m sure I’ll be doing several more posts about all of this – about the process of how it all came together, about the pains of trying to find the right developer to work with (which I did, finally!), about the fun of dealing with the app store submission process, etc… :-)

But for now I’m just hoping that people check it out, tell their friends, and mostly have fun!

*Also, someone remind me to write a blog post at some point about what an awesome tool http://www.unbounce.com is for building product websites like this – it saved me tons of time.